Jan 22 10:25:19 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 10:25:19 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:19 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 
10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 10:25:20 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 10:25:20 crc restorecon[4687]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 10:25:20 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 10:25:20 crc kubenswrapper[4752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 10:25:20 crc kubenswrapper[4752]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 10:25:20 crc kubenswrapper[4752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 10:25:20 crc kubenswrapper[4752]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 10:25:20 crc kubenswrapper[4752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 10:25:20 crc kubenswrapper[4752]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.911930 4752 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915176 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915198 4752 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915203 4752 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915209 4752 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915215 4752 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915221 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915226 4752 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915231 4752 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915236 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915241 4752 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915248 4752 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
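The "Flag ... has been deprecated" messages above all point at the same remedy: set the option in the file passed to the kubelet's --config flag. A minimal sketch of those settings as kubelet.config.k8s.io/v1beta1 fields — the field names are from that API, but every value here is an illustrative placeholder, not read from this node:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (placeholder CRI-O socket path)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --system-reserved
systemReserved:
  cpu: 500m
  memory: 1Gi
# replaces --register-with-taints (placeholder taint)
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# --minimum-container-ttl-duration has no direct field; the message above says
# to use eviction thresholds instead (placeholder value)
evictionHard:
  memory.available: 100Mi

--pod-infra-container-image is the exception: per the server.go line above, the sandbox image should also be configured in the container runtime itself (for CRI-O, its pause_image setting) rather than in this file.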
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915253 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915258 4752 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915263 4752 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915268 4752 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915272 4752 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915277 4752 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915282 4752 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915286 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915293 4752 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915297 4752 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915302 4752 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915306 4752 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915311 4752 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915315 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915321 4752 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915326 4752 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915330 4752 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915335 4752 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915340 4752 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915344 4752 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915349 4752 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915353 4752 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915359 4752 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915365 4752 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915372 4752 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915378 4752 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915384 4752 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915390 4752 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915396 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915402 4752 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915408 4752 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915414 4752 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915428 4752 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915433 4752 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915438 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915442 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915447 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915452 4752 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915456 4752 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915461 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915465 4752 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915470 4752 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915474 4752 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915479 4752 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915484 4752 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915489 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915493 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915500 4752 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915505 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915509 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915514 4752 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915518 4752 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915523 4752 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915527 4752 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915533 4752 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915537 4752 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915543 4752 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915549 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915556 4752 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.915560 4752 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915767 4752 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915780 4752 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915789 4752 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915794 4752 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915800 4752 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915804 4752 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915810 4752 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915816 4752 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915821 4752 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915825 4752 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915829 4752 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915833 4752 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915838 4752 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915842 4752 flags.go:64] FLAG: --cgroup-root=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915846 4752 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915850 4752 flags.go:64] FLAG: --client-ca-file=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915870 4752 flags.go:64] FLAG: --cloud-config=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915875 4752 flags.go:64] FLAG: --cloud-provider=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915879 4752 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915886 4752 flags.go:64] FLAG: --cluster-domain=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915891 4752 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915895 4752 flags.go:64] FLAG: --config-dir=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915899 4752 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915904 4752 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915910 4752 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915915 4752 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915919 4752 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915924 4752 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915928 4752 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915932 4752 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915937 4752 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915941 4752 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915946 4752 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915951 4752 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915956 4752 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915960 4752 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915964 4752 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915969 4752 flags.go:64] FLAG: --enable-server="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915973 4752 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915979 4752 flags.go:64] FLAG: --event-burst="100"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915984 4752 flags.go:64] FLAG: --event-qps="50"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915988 4752 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915993 4752 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.915997 4752 flags.go:64] FLAG: --eviction-hard=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916003 4752 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916007 4752 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916012 4752 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916016 4752 flags.go:64] FLAG: --eviction-soft=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916021 4752 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916025 4752 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916029 4752 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916034 4752 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916038 4752 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916042 4752 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916046 4752 flags.go:64] FLAG: --feature-gates=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916051 4752 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916057 4752 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916061 4752 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916066 4752 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916070 4752 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916074 4752 flags.go:64] FLAG: --help="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916079 4752 flags.go:64] FLAG: --hostname-override=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916083 4752 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916087 4752 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916092 4752 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916097 4752 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916101 4752 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916105 4752 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916110 4752 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916114 4752 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916119 4752 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916123 4752 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916128 4752 flags.go:64] FLAG: --kube-api-qps="50"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916132 4752 flags.go:64] FLAG: --kube-reserved=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916136 4752 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916140 4752 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916144 4752 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916148 4752 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916152 4752 flags.go:64] FLAG: --lock-file=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916157 4752 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916161 4752 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916165 4752 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916172 4752 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916176 4752 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916181 4752 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916185 4752 flags.go:64] FLAG: --logging-format="text"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916190 4752 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916194 4752 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916198 4752 flags.go:64] FLAG: --manifest-url=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916202 4752 flags.go:64] FLAG: --manifest-url-header=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916208 4752 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916213 4752 flags.go:64] FLAG: --max-open-files="1000000"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916218 4752 flags.go:64] FLAG: --max-pods="110"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916223 4752 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916227 4752 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916231 4752 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916236 4752 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916240 4752 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916244 4752 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916248 4752 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916259 4752 flags.go:64] FLAG: --node-status-max-images="50"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916264 4752 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916268 4752 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916272 4752 flags.go:64] FLAG: --pod-cidr=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916276 4752 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916282 4752 flags.go:64] FLAG: --pod-manifest-path=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916286 4752 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916291 4752 flags.go:64] FLAG: --pods-per-core="0"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916295 4752 flags.go:64] FLAG: --port="10250"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916299 4752 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916303 4752 flags.go:64] FLAG: --provider-id=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916307 4752 flags.go:64] FLAG: --qos-reserved=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916311 4752 flags.go:64] FLAG: --read-only-port="10255"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916316 4752 flags.go:64] FLAG: --register-node="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916320 4752 flags.go:64] FLAG: --register-schedulable="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916325 4752 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916332 4752 flags.go:64] FLAG: --registry-burst="10"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916336 4752 flags.go:64] FLAG: --registry-qps="5"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916340 4752 flags.go:64] FLAG: --reserved-cpus=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916345 4752 flags.go:64] FLAG: --reserved-memory=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916350 4752 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916354 4752 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916358 4752 flags.go:64] FLAG: --rotate-certificates="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916362 4752 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916367 4752 flags.go:64] FLAG: --runonce="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916371 4752 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916375 4752 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916380 4752 flags.go:64] FLAG: --seccomp-default="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916385 4752 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916390 4752 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916394 4752 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916398 4752 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916403 4752 flags.go:64] FLAG: --storage-driver-password="root"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916407 4752 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916411 4752 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916415 4752 flags.go:64] FLAG: --storage-driver-user="root"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916419 4752 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916424 4752 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916428 4752 flags.go:64] FLAG: --system-cgroups=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916432 4752 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916439 4752 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916443 4752 flags.go:64] FLAG: --tls-cert-file=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916447 4752 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916452 4752 flags.go:64] FLAG: --tls-min-version=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916456 4752 flags.go:64] FLAG: --tls-private-key-file=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916460 4752 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916465 4752 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916473 4752 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916477 4752 flags.go:64] FLAG: --v="2"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916483 4752 flags.go:64] FLAG: --version="false"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916490 4752 flags.go:64] FLAG: --vmodule=""
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916496 4752 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916500 4752 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916623 4752 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916629 4752 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916635 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916639 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916643 4752 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916648 4752 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916651 4752 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916655 4752 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916659 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916663 4752 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916667 4752 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916671 4752 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916675 4752 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916679 4752 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916683 4752 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916687 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916691 4752 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916694 4752 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916698 4752 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916702 4752 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916706 4752 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916710 4752 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916713 4752 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916717 4752 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916720 4752 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916724 4752 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916729 4752 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916733 4752 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916737 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916740 4752 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916744 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916747 4752 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916751 4752 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916754 4752 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916758 4752 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916761 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916765 4752 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916768 4752 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916773 4752 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916777 4752 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916781 4752 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916785 4752 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916789 4752 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916793 4752 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916798 4752 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916803 4752 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916807 4752 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916813 4752 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916817 4752 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916822 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916826 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916830 4752 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916835 4752 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916840 4752 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916844 4752 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916848 4752 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916856 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916875 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916882 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916886 4752 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916891 4752 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
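The same "unrecognized feature gate" sweep repeats several times during startup as the gate list is re-parsed at different stages, which makes the log hard to eyeball. A small stdlib-only Go helper (an illustration for log analysis, not part of the kubelet) that tallies these warnings from a saved journal excerpt piped to stdin, for example the output of journalctl -u kubelet, so the sweeps can be confirmed to report the same set:

    package main

    import (
            "bufio"
            "fmt"
            "os"
            "regexp"
            "sort"
    )

    func main() {
            // Matches the message format seen above: "unrecognized feature gate: <Name>".
            re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
            counts := map[string]int{}
            sc := bufio.NewScanner(os.Stdin)
            // Captured journal lines can be very long; raise the scanner's token limit.
            sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
            for sc.Scan() {
                    // A fused capture line may hold several entries, so match repeatedly.
                    for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                            counts[m[1]]++
                    }
            }
            gates := make([]string, 0, len(counts))
            for g := range counts {
                    gates = append(gates, g)
            }
            sort.Strings(gates)
            for _, g := range gates {
                    fmt.Printf("%3d  %s\n", counts[g], g)
            }
    }

An equal count next to every gate name indicates each sweep saw the identical set, as the excerpts here do.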
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916895 4752 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916900 4752 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916904 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916908 4752 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916911 4752 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916915 4752 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916919 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916923 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916927 4752 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.916930 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.916937 4752 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.932798 4752 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.932914 4752 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933128 4752 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933160 4752 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933172 4752 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933184 4752 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933195 4752 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933205 4752 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933216 4752 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933227 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933238 4752 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933248 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933258 4752 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933269 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933279 4752 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933290 4752 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933301 4752 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933311 4752 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933321 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933335 4752 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933351 4752 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933364 4752 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933375 4752 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933389 4752 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933406 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933417 4752 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933429 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933440 4752 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933451 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933462 4752 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933472 4752 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933483 4752 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933493 4752 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933504 4752 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933517 4752 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933531 4752 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933545 4752 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933555 4752 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933566 4752 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933578 4752 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933589 4752 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933599 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933611 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933621 4752 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933631 4752 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933641 4752 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933651 4752 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933662 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933675 4752 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933685 4752 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933696 4752 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933706 4752 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933717 4752 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933727 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933737 4752 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933748 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933758 4752 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933771 4752 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933782 4752 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933792 4752 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933803 4752 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933814 4752 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933825 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933835 4752 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933845 4752 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933891 4752 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933938 4752 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933950 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933960 4752 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933971 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933981 4752 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.933992 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934002 4752 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.934021 4752 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934372 4752 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934393 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934405 4752 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934417 4752 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934428 4752 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934438 4752 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934449 4752 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934460 4752 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934470 4752 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934481 4752 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934492 4752 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934502 4752 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934513 4752 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934527 4752 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934542 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934553 4752 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934564 4752 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934575 4752 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934585 4752 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934598 4752 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934608 4752 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934619 4752 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934629 4752 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934639 4752 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934649 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934659 4752 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934668 4752 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934678 4752 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934688 4752 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934698 4752 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934709 4752 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934719 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934729 4752 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934739 4752 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934749 4752 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934760 4752 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934770 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934785 4752 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934795 4752 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934806 4752 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934822 4752 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934836 4752 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934849 4752 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934899 4752 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934914 4752 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934926 4752 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934937 4752 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934948 4752 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934960 4752 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934970 4752 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934980 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.934993 4752 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935003 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935013 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935024 4752 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935034 4752 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935045 4752 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935055 4752 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935066 4752 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935076 4752 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935087 4752 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935101 4752 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935115 4752 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935127 4752 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935139 4752 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935150 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935162 4752 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935172 4752 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935183 4752 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935194 4752 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 10:25:20 crc kubenswrapper[4752]: W0122 10:25:20.935204 4752 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.935223 4752 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.936086 4752 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.942792 4752 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.943014 4752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
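The certificate_store.go line above names the client cert/key pair the kubelet loads, and the certificate_manager lines that follow report its expiration and a rotation deadline that is already in the past at this boot, which is why rotation starts immediately (and then fails while the API server is unreachable). A stdlib-only Go sketch, an illustration rather than an official tool, for cross-checking those dates directly against the PEM file (run on the node, with read access to the path from the log):

    package main

    import (
            "crypto/x509"
            "encoding/pem"
            "fmt"
            "log"
            "os"
    )

    func main() {
            data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
            if err != nil {
                    log.Fatal(err)
            }
            // The file bundles the private key with one or more certificates; walk
            // every PEM block and print validity for each certificate found.
            for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
                    if block.Type != "CERTIFICATE" {
                            continue
                    }
                    cert, err := x509.ParseCertificate(block.Bytes)
                    if err != nil {
                            log.Fatal(err)
                    }
                    fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                            cert.Subject, cert.NotBefore, cert.NotAfter)
            }
    }

The notAfter printed for the leaf certificate should match the "Certificate expiration is ..." line in the entries below.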
Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.943967 4752 server.go:997] "Starting client certificate rotation" Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.944021 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.944505 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-07 13:08:42.824952053 +0000 UTC Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.944603 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.951938 4752 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 10:25:20 crc kubenswrapper[4752]: E0122 10:25:20.953312 4752 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.954808 4752 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.962641 4752 log.go:25] "Validated CRI v1 runtime API" Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.981271 4752 log.go:25] "Validated CRI v1 image API" Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.983066 4752 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.986536 4752 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-10-18-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 22 10:25:20 crc kubenswrapper[4752]: I0122 10:25:20.986600 4752 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.016642 4752 manager.go:217] Machine: {Timestamp:2026-01-22 10:25:21.014364921 +0000 UTC m=+0.244307889 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d71a021a-a6a8-4801-b0d5-dbfd44512a09 BootID:1f6520db-2739-481f-9d91-77c81039e25e Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:26:f3:1a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:26:f3:1a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2a:fa:0c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:86:79:1c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:89:82:0f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:92:f3:fc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:d7:c0:da:b5:24 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:f0:a5:a3:b7:41 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.017146 4752 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.017357 4752 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.018204 4752 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.018548 4752 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.018598 4752 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.019025 4752 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.019048 4752 
container_manager_linux.go:303] "Creating device plugin manager" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.019339 4752 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.019389 4752 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.019620 4752 state_mem.go:36] "Initialized new in-memory state store" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.019925 4752 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.020831 4752 kubelet.go:418] "Attempting to sync node with API server" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.020939 4752 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.020985 4752 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.021020 4752 kubelet.go:324] "Adding apiserver pod source" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.021039 4752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.022808 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.022969 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.023253 4752 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.023296 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.023413 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.023804 4752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
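The reflector.go warnings above show the kubelet's informers failing their initial LIST against https://api-int.crc.testing:6443 because the API server is not up yet; reflectors retry with backoff, so these messages are transient during startup. The following client-go sketch reproduces the same list/watch pattern, filtered to the node name seen in the log's fieldSelector; the kubeconfig path is an assumption and the rest is illustrative, not the kubelet's own wiring.

// nodewatch_sketch.go: a minimal client-go sketch of the list/watch pattern
// behind the reflector.go messages above.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch only the node named "crc", like the fieldSelector in the log.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 0,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = "metadata.name=crc"
		}))
	inf := factory.Core().V1().Nodes().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("node object synced") },
	})
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks, retrying with backoff, until the first LIST succeeds --
	// which is why the "connection refused" warnings above are benign.
	cache.WaitForCacheSync(stop, inf.HasSynced)
}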
Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.024814 4752 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025526 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025618 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025683 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025742 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025805 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025886 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.025950 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.026029 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.026092 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.026157 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.026237 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.026330 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.026608 4752 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.027198 4752 server.go:1280] "Started kubelet" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.027912 4752 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.028083 4752 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.028062 4752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.028953 4752 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 10:25:21 crc systemd[1]: Started Kubernetes Kubelet. 
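Once "Started kubelet" and "Starting to listen" appear, the kubelet's HTTPS endpoint on port 10250 (alongside the debug handlers and the podresources socket) can be probed locally. A small stdlib smoke test might look like the sketch below; the token path is hypothetical and TLS verification is skipped, so this is only a local check, not how the API server authenticates to the kubelet.

// kubelet_healthz_sketch.go: probe the kubelet HTTPS endpoint from the
// "Starting to listen" entry above (address 0.0.0.0, port 10250).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed serving cert
	}}
	req, _ := http.NewRequest("GET", "https://127.0.0.1:10250/healthz", nil)
	if tok, err := os.ReadFile("/path/to/token"); err == nil { // hypothetical token file
		req.Header.Set("Authorization", "Bearer "+string(tok))
	}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("kubelet not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body) // expect "200 OK ok" on a healthy kubelet
}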
Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.032522 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.032673 4752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.033550 4752 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.033586 4752 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.033758 4752 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.034090 4752 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.033235 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:02:24.936388945 +0000 UTC Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.036632 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.036728 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.036976 4752 server.go:460] "Adding debug handlers to kubelet server" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.037032 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="200ms" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.031610 4752 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.67:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d06a1e4f3e354 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:25:21.027162964 +0000 UTC m=+0.257105872,LastTimestamp:2026-01-22 10:25:21.027162964 +0000 UTC m=+0.257105872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.038171 4752 factory.go:55] Registering systemd factory Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.038197 4752 factory.go:221] Registration of the systemd container factory successfully Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.038494 4752 factory.go:153] Registering CRI-O factory Jan 22 10:25:21 crc kubenswrapper[4752]: 
I0122 10:25:21.038512 4752 factory.go:221] Registration of the crio container factory successfully Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.038580 4752 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.038608 4752 factory.go:103] Registering Raw factory Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.038633 4752 manager.go:1196] Started watching for new ooms in manager Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.043097 4752 manager.go:319] Starting recovery of all containers Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051163 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051274 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051310 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051331 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051350 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051369 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051388 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051408 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051428 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051447 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051465 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051483 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051501 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051522 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051540 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051558 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051586 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051615 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051636 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.051657 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052666 4752 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052733 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052759 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052782 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052804 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052823 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052847 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052905 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052931 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052954 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052975 4752 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.052993 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053012 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053032 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053051 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053072 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053091 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053110 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053129 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053146 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053167 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053186 4752 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053205 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053224 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053243 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053261 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053282 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053300 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053319 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053338 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053355 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053374 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053391 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053417 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053436 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053456 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053478 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053498 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053520 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053537 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053555 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053632 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053652 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053670 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053691 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053710 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053728 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053747 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053765 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053782 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053800 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053817 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053837 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053886 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053907 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053926 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053943 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053959 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053977 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.053994 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054214 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054238 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054265 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054289 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054318 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054337 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054356 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054381 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054408 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054472 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054501 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054525 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054549 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054572 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054596 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054621 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054650 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054674 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054701 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054726 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054753 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054777 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054834 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054854 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054943 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054973 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.054995 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055015 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055034 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055053 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055073 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055121 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055143 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055162 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055181 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055201 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055218 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055236 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055255 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055274 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055297 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055320 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055345 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055370 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055396 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055418 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055438 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055458 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055476 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055494 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055513 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055530 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055550 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055568 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055587 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055605 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055624 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055642 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055659 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055676 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055694 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055713 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055731 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055749 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055768 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055831 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055895 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055923 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055953 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.055981 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056005 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056024 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056043 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056061 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056080 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056097 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056116 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056133 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056207 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056224 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056246 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056265 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056283 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056301 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056318 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056335 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056354 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056373 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056390 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056408 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056429 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056446 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056465 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056483 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056503 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056521 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056539 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056558 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056575 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056596 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056613 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056631 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056648 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056665 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056682 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056701 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056720 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056737 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056753 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056770 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056789 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056805 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056822 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056840 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056892 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056911 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056930 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056946 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056964 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056982 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.056999 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057025 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057043 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057062 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057079 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057095 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057112 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057128 4752 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057147 4752 reconstruct.go:97] "Volume reconstruction finished" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.057160 4752 reconciler.go:26] "Reconciler: start to sync state" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.069697 4752 manager.go:324] Recovery completed Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.089546 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.091945 4752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.092122 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.092166 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.092179 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.093474 4752 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.093580 4752 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.093671 4752 state_mem.go:36] "Initialized new in-memory state store" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.096482 4752 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.096539 4752 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.096569 4752 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.096617 4752 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.097743 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.097842 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.109099 4752 policy_none.go:49] "None policy: Start" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.110895 4752 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.110936 4752 state_mem.go:35] "Initializing new in-memory state store" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.134558 4752 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.164933 4752 manager.go:334] "Starting Device Plugin manager" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.165555 4752 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.165587 4752 server.go:79] "Starting device plugin registration server" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.166130 4752 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.166153 4752 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.166335 4752 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.166452 4752 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.166466 4752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.180403 4752 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.197642 4752 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 10:25:21 crc kubenswrapper[4752]: 
I0122 10:25:21.197769 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.198882 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.198910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.198923 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.199041 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.199342 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.199407 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.199746 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.199795 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.199811 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.200064 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.200288 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.200342 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.200421 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.200468 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.200485 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.201120 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.201157 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.201171 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.201295 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.201434 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.201471 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202038 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202082 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202099 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202116 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202142 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202158 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202179 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202199 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202210 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202328 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202505 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.202540 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203391 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203420 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203443 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203456 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203424 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203555 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203610 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.203636 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.204372 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.204412 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.204429 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.238179 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="400ms" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260040 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260078 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260100 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260119 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260139 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260159 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260178 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260196 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260255 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260318 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260369 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260414 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260446 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260481 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.260513 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.266409 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 
10:25:21.267609 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.267650 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.267662 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.267689 4752 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.268157 4752 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.67:6443: connect: connection refused" node="crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362299 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362393 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362443 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362491 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362540 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362569 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362614 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 
10:25:21.362657 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362668 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362731 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362738 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362590 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362831 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362911 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362954 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.362996 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363038 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 
10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363080 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363124 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363165 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363206 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363787 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363879 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363847 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363931 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363977 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.363983 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.364062 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.364032 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.364034 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.469218 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.471072 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.471136 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.471155 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.471195 4752 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.471736 4752 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.67:6443: connect: connection refused" node="crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.523263 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.531496 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.548043 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.553223 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ea2a2f943f44fa55cab192ce36251564a118a635855c83f330ae804a4b705641 WatchSource:0}: Error finding container ea2a2f943f44fa55cab192ce36251564a118a635855c83f330ae804a4b705641: Status 404 returned error can't find the container with id ea2a2f943f44fa55cab192ce36251564a118a635855c83f330ae804a4b705641 Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.554134 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3e95443a1476accf1111e4bdfb5c834fb2860a28df77e82ea7826998ae396190 WatchSource:0}: Error finding container 3e95443a1476accf1111e4bdfb5c834fb2860a28df77e82ea7826998ae396190: Status 404 returned error can't find the container with id 3e95443a1476accf1111e4bdfb5c834fb2860a28df77e82ea7826998ae396190 Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.568899 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bbb5cc2e763bd58c7164723734cd3c9c2db0f1311dc8e7c5a70ff7a921e4af64 WatchSource:0}: Error finding container bbb5cc2e763bd58c7164723734cd3c9c2db0f1311dc8e7c5a70ff7a921e4af64: Status 404 returned error can't find the container with id bbb5cc2e763bd58c7164723734cd3c9c2db0f1311dc8e7c5a70ff7a921e4af64 Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.569010 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.578742 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.595354 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c0adff00d235f49db0d50097c9fe1e80852f6d1da5cbb7a3ec94d861b219b175 WatchSource:0}: Error finding container c0adff00d235f49db0d50097c9fe1e80852f6d1da5cbb7a3ec94d861b219b175: Status 404 returned error can't find the container with id c0adff00d235f49db0d50097c9fe1e80852f6d1da5cbb7a3ec94d861b219b175 Jan 22 10:25:21 crc kubenswrapper[4752]: W0122 10:25:21.605241 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5cd6e10cab2d3c03ebdceaf843454760680e53fa85fde672ea6e58e6c43ed7b9 WatchSource:0}: Error finding container 5cd6e10cab2d3c03ebdceaf843454760680e53fa85fde672ea6e58e6c43ed7b9: Status 404 returned error can't find the container with id 5cd6e10cab2d3c03ebdceaf843454760680e53fa85fde672ea6e58e6c43ed7b9 Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.639333 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="800ms" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.872223 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.873505 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.873541 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.873553 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:21 crc kubenswrapper[4752]: I0122 10:25:21.873582 4752 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:25:21 crc kubenswrapper[4752]: E0122 10:25:21.873964 4752 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.67:6443: connect: connection refused" node="crc" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.029191 4752 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.036563 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:48:59.173614048 +0000 UTC Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.102355 4752 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10" exitCode=0 Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.102433 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.102524 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5cd6e10cab2d3c03ebdceaf843454760680e53fa85fde672ea6e58e6c43ed7b9"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.102661 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.103820 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.103902 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.103920 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.104588 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.104614 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0adff00d235f49db0d50097c9fe1e80852f6d1da5cbb7a3ec94d861b219b175"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.107150 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c" exitCode=0 Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.107204 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.107221 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bbb5cc2e763bd58c7164723734cd3c9c2db0f1311dc8e7c5a70ff7a921e4af64"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.107297 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.110152 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.110190 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.110203 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.112166 4752 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac" exitCode=0 Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.112220 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.112235 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ea2a2f943f44fa55cab192ce36251564a118a635855c83f330ae804a4b705641"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.112306 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.113133 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.113157 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.113167 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114039 4752 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="be94453168f044314f18b8b4c002faaa7e4cd450d310086d2732ffe31e2c00f4" exitCode=0 Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114071 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"be94453168f044314f18b8b4c002faaa7e4cd450d310086d2732ffe31e2c00f4"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114087 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3e95443a1476accf1111e4bdfb5c834fb2860a28df77e82ea7826998ae396190"} Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114135 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114623 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114670 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.114680 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.115753 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.116835 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.116874 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.116900 4752 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:22 crc kubenswrapper[4752]: W0122 10:25:22.177022 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:22 crc kubenswrapper[4752]: E0122 10:25:22.177166 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:22 crc kubenswrapper[4752]: W0122 10:25:22.420398 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:22 crc kubenswrapper[4752]: E0122 10:25:22.420510 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:22 crc kubenswrapper[4752]: E0122 10:25:22.440108 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="1.6s" Jan 22 10:25:22 crc kubenswrapper[4752]: W0122 10:25:22.574725 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:22 crc kubenswrapper[4752]: E0122 10:25:22.574807 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:22 crc kubenswrapper[4752]: W0122 10:25:22.576202 4752 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:22 crc kubenswrapper[4752]: E0122 10:25:22.576250 4752 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.675002 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:22 crc kubenswrapper[4752]: 
I0122 10:25:22.679012 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.679050 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.679061 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:22 crc kubenswrapper[4752]: I0122 10:25:22.679086 4752 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:25:22 crc kubenswrapper[4752]: E0122 10:25:22.679481 4752 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.67:6443: connect: connection refused" node="crc" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.029312 4752 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.037233 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:51:08.853952473 +0000 UTC Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.063472 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 10:25:23 crc kubenswrapper[4752]: E0122 10:25:23.064465 4752 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.67:6443: connect: connection refused" logger="UnhandledError" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.118975 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.119018 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.119030 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.119109 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.119777 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.119796 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:23 crc 
kubenswrapper[4752]: I0122 10:25:23.119805 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.121518 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.121539 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.121549 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.121600 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.122329 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.122357 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.122366 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124076 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124124 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124134 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124143 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124151 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124241 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124966 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.124996 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.125007 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.126069 4752 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1" exitCode=0 Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.126122 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.126222 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.126895 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.126917 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.126927 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.128000 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1d8bd10543aac345bf6c137afa65282b114e588c5962c6a391d1e4feb42fc507"} Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.128076 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.128668 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.128703 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:23 crc kubenswrapper[4752]: I0122 10:25:23.128715 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.037669 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:40:11.082157987 +0000 UTC Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.134786 4752 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049" exitCode=0 Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.134892 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049"} Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.135015 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.135106 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.136632 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.136700 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.136718 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.136635 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.136764 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.136781 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.279623 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.280775 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.280828 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.280846 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:24 crc kubenswrapper[4752]: I0122 10:25:24.280897 4752 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.037944 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:32:10.335557921 +0000 UTC Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.140764 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f"} Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.140827 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc"} Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.332925 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.333116 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.333177 4752 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.334943 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.335010 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.335029 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:25 crc kubenswrapper[4752]: I0122 10:25:25.372888 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.038727 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:17:14.443179445 +0000 UTC Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.150457 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6"} Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.150493 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.150517 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c"} Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.150540 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0"} Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.150558 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.150558 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.152075 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.152128 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.152148 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.152533 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.152589 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:26 crc kubenswrapper[4752]: I0122 10:25:26.152606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.039777 4752 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:03:36.924508534 +0000 UTC Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.153993 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.155202 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.155246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.155264 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.261479 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.846747 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.847015 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.848399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.848429 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:27 crc kubenswrapper[4752]: I0122 10:25:27.848439 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:28 crc kubenswrapper[4752]: I0122 10:25:28.040452 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:42:28.000078416 +0000 UTC Jan 22 10:25:28 crc kubenswrapper[4752]: I0122 10:25:28.450627 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 10:25:28 crc kubenswrapper[4752]: I0122 10:25:28.450919 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:28 crc kubenswrapper[4752]: I0122 10:25:28.466993 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:28 crc kubenswrapper[4752]: I0122 10:25:28.467061 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:28 crc kubenswrapper[4752]: I0122 10:25:28.467075 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.041577 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:44:04.45723895 +0000 UTC Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.081175 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.081435 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.082779 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.082821 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.082838 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.978100 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.978717 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.980802 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.980991 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:29 crc kubenswrapper[4752]: I0122 10:25:29.981030 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.042499 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:09:30.816611952 +0000 UTC Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.046701 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.046971 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.048706 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.048757 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.048773 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.846778 4752 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.846916 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.972352 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.972513 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.973709 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.973746 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:30 crc kubenswrapper[4752]: I0122 10:25:30.973763 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.043122 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:02:19.236525509 +0000 UTC Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.159537 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.159847 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.161586 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.161629 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.161641 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.167589 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.169185 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.170809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.170895 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.170926 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.180249 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:31 crc kubenswrapper[4752]: E0122 10:25:31.181151 4752 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 10:25:31 crc kubenswrapper[4752]: I0122 10:25:31.679137 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:32 crc kubenswrapper[4752]: I0122 10:25:32.044059 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-06 22:53:36.588765019 +0000 UTC Jan 22 10:25:32 crc kubenswrapper[4752]: I0122 10:25:32.172312 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:32 crc kubenswrapper[4752]: I0122 10:25:32.173491 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:32 crc kubenswrapper[4752]: I0122 10:25:32.173557 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:32 crc kubenswrapper[4752]: I0122 10:25:32.173582 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:33 crc kubenswrapper[4752]: I0122 10:25:33.044504 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:30:48.02715993 +0000 UTC Jan 22 10:25:33 crc kubenswrapper[4752]: I0122 10:25:33.174163 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:33 crc kubenswrapper[4752]: I0122 10:25:33.175060 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:33 crc kubenswrapper[4752]: I0122 10:25:33.175143 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:33 crc kubenswrapper[4752]: I0122 10:25:33.175201 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:34 crc kubenswrapper[4752]: I0122 10:25:34.037069 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 10:25:34 crc kubenswrapper[4752]: I0122 10:25:34.037326 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 10:25:34 crc kubenswrapper[4752]: I0122 10:25:34.044975 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:20:32.984034805 +0000 UTC Jan 22 10:25:34 crc kubenswrapper[4752]: I0122 10:25:34.046660 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 10:25:34 crc kubenswrapper[4752]: I0122 10:25:34.046760 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.046453 4752 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:44:23.090115041 +0000 UTC
Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.380942 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.381081 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.382389 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.382436 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.382448 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:25:35 crc kubenswrapper[4752]: I0122 10:25:35.385921 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:25:36 crc kubenswrapper[4752]: I0122 10:25:36.047085 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:45:43.286825029 +0000 UTC
Jan 22 10:25:36 crc kubenswrapper[4752]: I0122 10:25:36.180320 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:25:36 crc kubenswrapper[4752]: I0122 10:25:36.181678 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:25:36 crc kubenswrapper[4752]: I0122 10:25:36.181731 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:25:36 crc kubenswrapper[4752]: I0122 10:25:36.181748 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:25:37 crc kubenswrapper[4752]: I0122 10:25:37.047885 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:00:02.601457756 +0000 UTC
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.048354 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:13:42.508082128 +0000 UTC
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.489034 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.489360 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.491104 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.491165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.491184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:25:38 crc kubenswrapper[4752]: I0122 10:25:38.509998 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.033011 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.036226 4752 trace.go:236] Trace[1212902923]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:25:24.598) (total time: 14437ms):
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1212902923]: ---"Objects listed" error: 14437ms (10:25:39.036)
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1212902923]: [14.437366403s] [14.437366403s] END
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.036260 4752 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.036532 4752 trace.go:236] Trace[1717044771]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:25:24.711) (total time: 14324ms):
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1717044771]: ---"Objects listed" error: 14324ms (10:25:39.036)
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1717044771]: [14.324502189s] [14.324502189s] END
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.036561 4752 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.038273 4752 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.038315 4752 trace.go:236] Trace[1907689090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:25:25.607) (total time: 13430ms):
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1907689090]: ---"Objects listed" error: 13430ms (10:25:39.038)
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1907689090]: [13.430283725s] [13.430283725s] END
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.038340 4752 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.039030 4752 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.039611 4752 trace.go:236] Trace[1586527319]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 10:25:25.159) (total time: 13879ms):
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1586527319]: ---"Objects listed" error: 13879ms (10:25:39.039)
Jan 22 10:25:39 crc kubenswrapper[4752]: Trace[1586527319]: [13.879649646s] [13.879649646s] END
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.039630 4752 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.049010 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:16:56.571168821 +0000 UTC
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.053422 4752 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.055116 4752 apiserver.go:52] "Watching apiserver"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.057796 4752 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.058073 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.058425 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.058482 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.058549 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.058557 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.058612 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.058723 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.058934 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.059102 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.059140 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.060623 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.062826 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.063539 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.063552 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.064271 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.065196 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.065252 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.066443 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.085997 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34372->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.086118 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34372->192.168.126.11:17697: read: connection reset by peer"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.086422 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34370->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.086473 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34370->192.168.126.11:17697: read: connection reset by peer"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.086565 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.086902 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.086941 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.087331 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.087420 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.126219 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.137331 4752 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.138140 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.138887 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.138951 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.138991 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139029 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139064 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139101 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139136 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139175 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139216 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139256 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139292 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139326 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139345 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139364 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139409 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139445 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139477 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139512 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139560 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139602 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139641 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139681 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139742 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139788 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139826 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139897 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139936 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139344 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139402 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139610 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139690 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139762 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139825 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139954 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.139973 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140097 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140139 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140179 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140215 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140252 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140282 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140312 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140342 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140371 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140405 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140570 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140582 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140606 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140659 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140783 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140883 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.140895 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141085 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141190 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141263 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141287 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141470 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141746 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141812 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141900 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.141994 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142064 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142236 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142280 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142312 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142347 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142353 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142374 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142406 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142430 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142438 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142489 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142510 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142633 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142699 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142786 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142808 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142827 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142847 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142879 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142898 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142924 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142947 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142971 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.142993 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143002 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143013 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143029 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143045 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143061 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143080 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143096 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143113 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143129 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143146 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143144 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143163 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143209 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143243 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143278 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143290 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143410 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143421 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143155 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143451 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143519 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143563 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143598 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143635 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143674 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143716 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143727 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143746 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143777 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143796 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143842 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143901 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143903 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143974 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.143999 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144024 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144045 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144066 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144088 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144109 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144128 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144150 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144121 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144172 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144194 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144220 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144243 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144269 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144291 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144314 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144336 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144348 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144358 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144460 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144486 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144542 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144569 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144610 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144659 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144696 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144640 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144731 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144767 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144805 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144839 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144903 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144942 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.144974 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145010 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145036 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145086 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145114 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145140 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145165 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145190 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145219 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145256 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145282 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145308 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145323 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145333 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145373 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145396 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145501 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145645 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145915 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145936 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146024 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146200 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146233 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146270 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146417 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146497 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146595 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146630 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146650 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146673 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146692 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146713 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146731 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146749 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.146767 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147058 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod 
"a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147123 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147272 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147283 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147573 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147598 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147614 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147616 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147642 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147660 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147678 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147693 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147707 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147721 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147786 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147824 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148002 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148038 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148067 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148094 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148125 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148157 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148256 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148284 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148312 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148340 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148368 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148395 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148424 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148468 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148498 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148525 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148549 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148573 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148599 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148622 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150355 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150498 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150597 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150698 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150797 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150921 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151064 4752 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151194 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151348 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151449 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151550 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151641 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151752 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151894 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152041 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152161 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152262 4752 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147715 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154006 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147780 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.147940 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148094 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148296 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148321 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.148805 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.145412 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.149299 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.149526 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.149967 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150146 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150315 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150339 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150366 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150427 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150814 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150779 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150885 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154687 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154693 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151057 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151127 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151134 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151505 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154812 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151550 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.151930 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152110 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152275 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152291 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152573 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.153027 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.152847 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.153047 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.153263 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:25:39.653227318 +0000 UTC m=+18.883170276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.153360 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.153803 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.153869 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.153954 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154402 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154948 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154965 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.154998 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155020 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155038 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155066 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155176 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155232 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155252 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155279 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.155529 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156016 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156046 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156069 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156087 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156107 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156125 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156143 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.150923 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156163 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156183 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156202 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156246 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156267 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156311 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156335 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156379 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156401 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156423 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156490 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156514 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156542 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156567 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156587 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156604 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156755 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156795 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.156843 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157209 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157269 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157325 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157365 4752 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157377 4752 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157387 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157397 4752 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157408 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157419 4752 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157429 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157440 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157450 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157459 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157452 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157469 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157562 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157503 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157708 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157744 4752 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157769 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157791 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158034 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157711 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.157965 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158016 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158114 4752 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158144 4752 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158157 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158172 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158302 4752 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158309 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158316 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158328 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158342 4752 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158376 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158384 4752 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158421 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158436 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158450 4752 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158465 4752 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158582 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158597 4752 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158612 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158244 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158633 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158630 4752 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158664 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158694 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158714 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158728 4752 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158741 4752 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158755 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158768 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158783 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158795 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158807 4752 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158819 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 
10:25:39.158830 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158842 4752 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158898 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158911 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158923 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158935 4752 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158928 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.158946 4752 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159096 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159709 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159750 4752 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159771 4752 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159791 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159812 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159832 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159850 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159900 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159920 4752 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159943 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159963 4752 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath 
\"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.159986 4752 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160009 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160028 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160048 4752 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160067 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160085 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160104 4752 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160125 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160154 4752 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160143 4752 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160239 4752 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160259 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160277 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160295 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160313 4752 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160331 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160351 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160372 4752 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160391 4752 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160410 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160428 4752 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160447 4752 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160466 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160489 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160510 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160527 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160545 4752 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160563 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.160581 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161241 4752 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161315 4752 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161337 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161359 4752 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161377 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161398 4752 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161416 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161436 4752 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161455 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161475 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161494 4752 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.161512 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.162073 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.162237 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.162999 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.163235 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.163545 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.163606 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.163812 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.164794 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.165167 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.165475 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.165632 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.165843 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.166016 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:39.665981728 +0000 UTC m=+18.895924646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.166247 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.166595 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.166605 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168173 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168217 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168493 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168591 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168603 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168668 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168826 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.168970 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.169238 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.169211 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.169416 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.170137 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.170244 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:39.670213868 +0000 UTC m=+18.900156796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.170132 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.170682 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.171330 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.171966 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.172251 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.173610 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.175325 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.175356 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.175302 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.175711 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.176045 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176254 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176278 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176358 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176432 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176454 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:39.676431319 +0000 UTC m=+18.906374227 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176460 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176490 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.176586 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:39.676559152 +0000 UTC m=+18.906502070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.176760 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.177032 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.179009 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.182211 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.184203 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.184609 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.184745 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.187221 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.187436 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.187598 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.187692 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.188194 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.188364 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.188539 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.188804 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189038 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189264 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.188667 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189431 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189427 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189530 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189882 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.189811 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.191338 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.192338 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.192511 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.192815 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.192859 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.194328 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.194850 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.194997 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.195430 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.195595 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.195563 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.195745 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.195984 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.196055 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.196405 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.198760 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.200003 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.200154 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.201261 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.202464 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.204829 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.208184 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.210300 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a" exitCode=255 Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.210662 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a"} Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.219015 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.220051 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.220911 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.228843 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.233881 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.244692 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.256344 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262173 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262333 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262515 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262616 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262708 4752 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262579 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262786 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262892 4752 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262909 4752 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262922 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262936 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262352 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262951 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.262995 4752 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263007 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263019 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263030 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263043 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263064 4752 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263086 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263102 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc 
kubenswrapper[4752]: I0122 10:25:39.263115 4752 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263129 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263140 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263153 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263166 4752 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263179 4752 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263192 4752 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263205 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263216 4752 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263226 4752 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263236 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263246 4752 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263259 4752 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 
10:25:39.263270 4752 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263280 4752 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263291 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263302 4752 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263313 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263324 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263334 4752 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263347 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263357 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263369 4752 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263379 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263393 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263406 4752 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263417 
4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263428 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263438 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263449 4752 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263459 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263469 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263479 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263489 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263499 4752 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263511 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263524 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263537 4752 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263549 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 
crc kubenswrapper[4752]: I0122 10:25:39.263562 4752 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263574 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263587 4752 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263599 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263609 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263632 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263643 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263659 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263670 4752 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263679 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263689 4752 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263698 4752 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263709 4752 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 
crc kubenswrapper[4752]: I0122 10:25:39.263719 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263733 4752 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263746 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263758 4752 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263770 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263782 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263792 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263803 4752 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263818 4752 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263832 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263843 4752 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263873 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263886 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263898 
4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263911 4752 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263931 4752 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263943 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263955 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263968 4752 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263983 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.263996 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.264008 4752 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.264017 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.264031 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.267911 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.282292 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.291192 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.309769 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.330554 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.353511 4752 scope.go:117] "RemoveContainer" containerID="7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.354805 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.356076 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.405331 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 10:25:39 crc kubenswrapper[4752]: W0122 10:25:39.418581 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-7da6abddedce99e885ca36cf32983d5f689c35f6026eb0e1beba53f18108ffc2 WatchSource:0}: Error finding container 7da6abddedce99e885ca36cf32983d5f689c35f6026eb0e1beba53f18108ffc2: Status 404 returned error can't find the container with id 7da6abddedce99e885ca36cf32983d5f689c35f6026eb0e1beba53f18108ffc2 Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.421944 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.428368 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 10:25:39 crc kubenswrapper[4752]: W0122 10:25:39.445804 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-03df5d8fbe9bd277e24f752db5ac2c8f9cae4bde4d2a99877427094c32202b38 WatchSource:0}: Error finding container 03df5d8fbe9bd277e24f752db5ac2c8f9cae4bde4d2a99877427094c32202b38: Status 404 returned error can't find the container with id 03df5d8fbe9bd277e24f752db5ac2c8f9cae4bde4d2a99877427094c32202b38 Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.666697 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.666887 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:25:40.666847963 +0000 UTC m=+19.896790871 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.667186 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.667320 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.667381 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:40.667364006 +0000 UTC m=+19.897306994 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.768475 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.768540 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:39 crc kubenswrapper[4752]: I0122 10:25:39.768575 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768693 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768784 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768806 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:40.768780073 +0000 UTC m=+19.998723041 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768810 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768718 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768831 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768879 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768900 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768928 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:40.768905477 +0000 UTC m=+19.998848475 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:39 crc kubenswrapper[4752]: E0122 10:25:39.768956 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:40.768944578 +0000 UTC m=+19.998887606 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.049598 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:40:54.799839964 +0000 UTC Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.215180 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.215243 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.215264 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"03df5d8fbe9bd277e24f752db5ac2c8f9cae4bde4d2a99877427094c32202b38"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.217086 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d08b03379af8606272f4d8949981d11c55a45d414478c36b6f384c255ea9b5e0"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.218915 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.218971 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7da6abddedce99e885ca36cf32983d5f689c35f6026eb0e1beba53f18108ffc2"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.222469 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.226725 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c"} Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.226780 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.240390 4752 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613e
abd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.266806 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.283974 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.300106 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.317974 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.351295 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.367306 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.381955 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.396817 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.412242 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.427439 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.438920 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.459029 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.492648 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.513694 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.530616 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.555636 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.573956 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:40Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.679088 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.679196 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.679303 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.679364 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:42.679346431 +0000 UTC m=+21.909289349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.679497 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:25:42.679483605 +0000 UTC m=+21.909426513 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.780337 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.780433 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:40 crc kubenswrapper[4752]: I0122 10:25:40.780517 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.780642 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.780734 4752 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:42.780705307 +0000 UTC m=+22.010648265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.780971 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781015 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781030 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781105 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:42.781077756 +0000 UTC m=+22.011020714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781222 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781316 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781389 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:40 crc kubenswrapper[4752]: E0122 10:25:40.781514 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:42.781495817 +0000 UTC m=+22.011438725 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.050184 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:45:04.878019351 +0000 UTC Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.097256 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.097280 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:41 crc kubenswrapper[4752]: E0122 10:25:41.097396 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.097390 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:41 crc kubenswrapper[4752]: E0122 10:25:41.097603 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:41 crc kubenswrapper[4752]: E0122 10:25:41.097747 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.102001 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.102602 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.104184 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.104949 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.106208 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.106837 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.107519 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.108575 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.109386 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.110434 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.111099 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.112236 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.112791 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.113383 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.113614 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.114467 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.115043 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.116012 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.116426 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.117079 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.118232 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.118758 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.120021 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.120460 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.121494 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.122018 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.122733 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.124337 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.124954 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.125965 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.126481 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.127607 4752 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.127727 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.129574 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.129707 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.130592 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.131040 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.132564 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.133261 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.134178 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.134958 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.136072 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.136536 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.137665 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.138287 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.139310 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.139760 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.140671 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.141179 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.142455 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.142981 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.143820 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.144293 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.145236 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.145628 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.145824 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.146353 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.160299 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.177430 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.194161 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.210651 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.222693 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:41 crc kubenswrapper[4752]: I0122 10:25:41.244418 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.051074 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:20:11.018536165 +0000 UTC Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.242281 4752 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.251932 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.252044 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.252104 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.252293 4752 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.254050 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.273665 4752 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.273895 4752 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.274987 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.275021 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.275034 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.275051 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.275063 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.289647 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.313940 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.315050 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.318819 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.318868 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.318877 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.318890 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.318900 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.330814 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.332520 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.335488 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.335529 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.335541 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.335562 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.335573 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.346792 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.350663 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d
71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.354480 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.354509 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.354517 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.354531 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.354541 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.362601 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.367350 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.372313 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.372379 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.372394 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.372413 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.372427 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.377278 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.386229 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d
71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.386355 4752 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.388409 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.388446 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.388458 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.388476 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.388488 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.407282 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.423593 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.435992 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:42Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.491769 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.491806 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.491815 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.491829 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.491838 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.595492 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.595571 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.595593 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.595617 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.595636 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.699753 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.699960 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.699986 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.700065 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.700129 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.700594 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.700843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.701161 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.701359 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:25:46.701065081 +0000 UTC m=+25.931008029 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.701696 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:46.701493933 +0000 UTC m=+25.931437011 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.802464 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.802523 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.802566 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802667 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802744 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:46.802722625 +0000 UTC m=+26.032665543 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802812 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802907 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802928 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802954 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.802992 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.803021 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.803075 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:46.803038573 +0000 UTC m=+26.032981521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:42 crc kubenswrapper[4752]: E0122 10:25:42.803131 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:46.803098775 +0000 UTC m=+26.033041723 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.805324 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.805414 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.805434 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.805460 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.805478 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.908919 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.908989 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.909015 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.909054 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:42 crc kubenswrapper[4752]: I0122 10:25:42.909080 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:42Z","lastTransitionTime":"2026-01-22T10:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.011887 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.011976 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.012002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.012031 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.012056 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.052594 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:45:36.179180735 +0000 UTC Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.097646 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.097669 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:43 crc kubenswrapper[4752]: E0122 10:25:43.097912 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.098136 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:43 crc kubenswrapper[4752]: E0122 10:25:43.098263 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:43 crc kubenswrapper[4752]: E0122 10:25:43.098500 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.114849 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.114953 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.114977 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.115005 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.115031 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.217686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.217739 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.217754 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.217772 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.217783 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.320172 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.320222 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.320234 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.320251 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.320262 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.423331 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.423396 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.423413 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.423438 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.423457 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.526185 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.526271 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.526287 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.526315 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.526334 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.629368 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.629443 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.629467 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.629495 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.629513 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.732180 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.732291 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.732309 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.732331 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.732348 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.835164 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.835242 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.835264 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.835291 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.835312 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.938611 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.938657 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.938676 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.938701 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:43 crc kubenswrapper[4752]: I0122 10:25:43.938721 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:43Z","lastTransitionTime":"2026-01-22T10:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.040916 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.040981 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.040993 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.041018 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.041033 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.053233 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:52:02.260750812 +0000 UTC Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.145092 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.145180 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.145207 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.145240 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.145264 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.247791 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.247836 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.247846 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.247886 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.247898 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.351380 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.351439 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.351519 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.351556 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.351576 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.455202 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.455266 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.455290 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.455320 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.455341 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.559070 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.559147 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.559171 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.559201 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.559221 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.663696 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.663758 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.663769 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.663786 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.663798 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.766968 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.767048 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.767072 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.767095 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.767115 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.870066 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.870105 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.870113 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.870126 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.870135 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.972517 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.972558 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.972567 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.972581 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:44 crc kubenswrapper[4752]: I0122 10:25:44.972592 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:44Z","lastTransitionTime":"2026-01-22T10:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.053413 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:46:35.39913037 +0000 UTC Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.075944 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.076027 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.076052 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.076085 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.076115 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.097484 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.097568 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.097505 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:45 crc kubenswrapper[4752]: E0122 10:25:45.097692 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:45 crc kubenswrapper[4752]: E0122 10:25:45.097785 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:45 crc kubenswrapper[4752]: E0122 10:25:45.097889 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.179432 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.179493 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.179510 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.179536 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.179553 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.282641 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.282722 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.282746 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.282776 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.282797 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.385606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.385685 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.385710 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.385778 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.385802 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.488675 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.488750 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.488772 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.488801 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.488823 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.591335 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.591397 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.591407 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.591420 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.591430 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.694404 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.694463 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.694477 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.694495 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.694509 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.797468 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.797529 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.797547 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.797580 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.797597 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.905592 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.905660 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.905675 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.905696 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:45 crc kubenswrapper[4752]: I0122 10:25:45.905719 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:45Z","lastTransitionTime":"2026-01-22T10:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.008645 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.008685 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.008694 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.008707 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.008717 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.054488 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:17:28.943034024 +0000 UTC Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.091089 4752 csr.go:261] certificate signing request csr-t9kxz is approved, waiting to be issued Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.111105 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.111130 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.111138 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.111150 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.111159 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.113877 4752 csr.go:257] certificate signing request csr-t9kxz is issued Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.213809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.213850 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.213875 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.213891 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.213899 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.316285 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.316323 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.316332 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.316353 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.316365 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.419798 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.419851 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.419900 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.419925 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.419943 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.516478 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6nmbt"] Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.516772 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.517976 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-prdjr"] Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.518126 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.522449 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.522477 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.522486 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.522500 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.522510 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.529250 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.529581 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.531353 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.532227 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.532754 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.533357 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.533498 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-v6hm8"] Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.533803 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.534724 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6pbrv"] Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.535205 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.535356 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: W0122 10:25:46.545317 4752 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.545370 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:25:46 crc kubenswrapper[4752]: W0122 10:25:46.545379 4752 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 22 10:25:46 crc kubenswrapper[4752]: W0122 10:25:46.545322 4752 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.545407 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.545417 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:25:46 crc kubenswrapper[4752]: W0122 10:25:46.546539 4752 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.546571 4752 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:25:46 crc kubenswrapper[4752]: W0122 10:25:46.546545 4752 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.546633 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.546636 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.546670 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.552194 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.580994 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.615334 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.624347 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.624390 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.624407 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.624427 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.624440 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638595 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnnl\" (UniqueName: \"kubernetes.io/projected/8271e9d0-84de-47c5-82bb-35fd1af29e23-kube-api-access-gxnnl\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638637 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01ba31a2-a4da-4736-8b30-1c4cf57e39fd-hosts-file\") pod \"node-resolver-prdjr\" (UID: \"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\") " pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638656 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-system-cni-dir\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638693 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb8df70c-9474-4827-8831-f39fc6883d79-mcd-auth-proxy-config\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638711 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/25322265-5a85-4c78-bf60-61836307404e-multus-daemon-config\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638725 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/eb8df70c-9474-4827-8831-f39fc6883d79-rootfs\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638739 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffsqt\" (UniqueName: \"kubernetes.io/projected/eb8df70c-9474-4827-8831-f39fc6883d79-kube-api-access-ffsqt\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638754 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-cnibin\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638770 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-hostroot\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638786 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-conf-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638814 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk788\" (UniqueName: \"kubernetes.io/projected/01ba31a2-a4da-4736-8b30-1c4cf57e39fd-kube-api-access-dk788\") pod \"node-resolver-prdjr\" (UID: \"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\") " pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638833 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-multus-certs\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638846 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-etc-kubernetes\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638880 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638893 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-cnibin\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638909 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/25322265-5a85-4c78-bf60-61836307404e-cni-binary-copy\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638924 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-cni-bin\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638941 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-socket-dir-parent\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638955 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-k8s-cni-cncf-io\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638969 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-os-release\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638984 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-cni-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.638998 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-kubelet\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639012 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb8df70c-9474-4827-8831-f39fc6883d79-proxy-tls\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639034 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-cni-multus\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639049 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639063 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ww26\" (UniqueName: \"kubernetes.io/projected/25322265-5a85-4c78-bf60-61836307404e-kube-api-access-5ww26\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639077 
4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-system-cni-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639093 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-os-release\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639115 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-binary-copy\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.639129 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-netns\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.663774 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.688523 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.720043 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.726673 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.726725 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.726764 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.726784 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.726796 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.739754 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.739848 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.739890 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ww26\" (UniqueName: \"kubernetes.io/projected/25322265-5a85-4c78-bf60-61836307404e-kube-api-access-5ww26\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.739951 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:25:54.739924204 +0000 UTC m=+33.969867112 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740005 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-system-cni-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740028 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-os-release\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740063 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-binary-copy\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740080 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-netns\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740097 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01ba31a2-a4da-4736-8b30-1c4cf57e39fd-hosts-file\") pod \"node-resolver-prdjr\" (UID: \"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\") " pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740113 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxnnl\" (UniqueName: \"kubernetes.io/projected/8271e9d0-84de-47c5-82bb-35fd1af29e23-kube-api-access-gxnnl\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740109 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-system-cni-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740136 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-system-cni-dir\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 
10:25:46.740132 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-os-release\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740154 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb8df70c-9474-4827-8831-f39fc6883d79-mcd-auth-proxy-config\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740170 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/25322265-5a85-4c78-bf60-61836307404e-multus-daemon-config\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740187 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/eb8df70c-9474-4827-8831-f39fc6883d79-rootfs\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740193 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/01ba31a2-a4da-4736-8b30-1c4cf57e39fd-hosts-file\") pod \"node-resolver-prdjr\" (UID: \"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\") " pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740203 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-conf-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740223 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-netns\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740233 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk788\" (UniqueName: \"kubernetes.io/projected/01ba31a2-a4da-4736-8b30-1c4cf57e39fd-kube-api-access-dk788\") pod \"node-resolver-prdjr\" (UID: \"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\") " pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740249 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffsqt\" (UniqueName: \"kubernetes.io/projected/eb8df70c-9474-4827-8831-f39fc6883d79-kube-api-access-ffsqt\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740263 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-cnibin\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740278 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-hostroot\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740299 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740314 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-multus-certs\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740331 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740346 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-etc-kubernetes\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740361 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-cnibin\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740378 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/25322265-5a85-4c78-bf60-61836307404e-cni-binary-copy\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740392 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-cni-bin\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740408 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-socket-dir-parent\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740424 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-k8s-cni-cncf-io\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740440 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-os-release\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740454 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-cni-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740469 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb8df70c-9474-4827-8831-f39fc6883d79-proxy-tls\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740482 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-cni-multus\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740496 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-kubelet\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740563 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-kubelet\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740755 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-binary-copy\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740880 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb8df70c-9474-4827-8831-f39fc6883d79-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740889 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-system-cni-dir\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740927 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-etc-kubernetes\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740930 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-cnibin\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740956 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-hostroot\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740956 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/eb8df70c-9474-4827-8831-f39fc6883d79-rootfs\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.740973 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-conf-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.741047 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.741100 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:54.741081964 +0000 UTC m=+33.971024972 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741119 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-k8s-cni-cncf-io\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741148 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-cnibin\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741255 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/25322265-5a85-4c78-bf60-61836307404e-multus-daemon-config\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741331 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-os-release\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741347 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-run-multus-certs\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741387 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-cni-dir\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741439 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-cni-bin\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741496 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-multus-socket-dir-parent\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741541 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/25322265-5a85-4c78-bf60-61836307404e-cni-binary-copy\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741565 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/25322265-5a85-4c78-bf60-61836307404e-host-var-lib-cni-multus\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.741584 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8271e9d0-84de-47c5-82bb-35fd1af29e23-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.742674 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.759889 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.764526 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk788\" (UniqueName: \"kubernetes.io/projected/01ba31a2-a4da-4736-8b30-1c4cf57e39fd-kube-api-access-dk788\") pod \"node-resolver-prdjr\" (UID: \"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\") " pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.767263 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5ww26\" (UniqueName: \"kubernetes.io/projected/25322265-5a85-4c78-bf60-61836307404e-kube-api-access-5ww26\") pod \"multus-6nmbt\" (UID: \"25322265-5a85-4c78-bf60-61836307404e\") " pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.773478 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxnnl\" (UniqueName: \"kubernetes.io/projected/8271e9d0-84de-47c5-82bb-35fd1af29e23-kube-api-access-gxnnl\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.775952 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.789497 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.817500 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.828604 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.828633 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.828641 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.828653 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.828663 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.833040 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6nmbt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.836732 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.841457 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.841518 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.841593 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841709 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841752 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841781 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:54.841763972 +0000 UTC m=+34.071706890 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841790 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841807 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841829 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841887 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:54.841869695 +0000 UTC m=+34.071812603 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841903 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.841924 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:46 crc kubenswrapper[4752]: E0122 10:25:46.842017 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:54.841987488 +0000 UTC m=+34.071930586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.846533 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-prdjr" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.856471 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: W0122 10:25:46.859125 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ba31a2_a4da_4736_8b30_1c4cf57e39fd.slice/crio-521fcd759000a6e046f27554734b23fa8a16d682222490b3347b166b3018ef0a WatchSource:0}: Error finding container 521fcd759000a6e046f27554734b23fa8a16d682222490b3347b166b3018ef0a: Status 404 returned error can't find the container with id 521fcd759000a6e046f27554734b23fa8a16d682222490b3347b166b3018ef0a Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.874948 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.898730 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.917583 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.925525 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-784rk"] Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.926320 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.929113 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.929168 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.929561 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.929594 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.930154 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.931017 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.931431 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.933336 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.933356 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.933370 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.933383 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.933392 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:46Z","lastTransitionTime":"2026-01-22T10:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.936656 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.948329 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.961476 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.977825 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:46 crc kubenswrapper[4752]: I0122 10:25:46.991762 4752 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:46Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.006926 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.037769 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.039158 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.039192 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.039203 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.039217 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.039226 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043463 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-systemd\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043507 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-kubelet\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043522 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-ovn-kubernetes\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043541 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-bin\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043557 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-netd\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043572 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6gs\" (UniqueName: \"kubernetes.io/projected/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-kube-api-access-sb6gs\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043591 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-etc-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043704 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-ovn\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043801 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-env-overrides\") pod 
\"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043846 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-netns\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043902 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-log-socket\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043947 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-slash\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043967 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.043987 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.044006 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-script-lib\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.044027 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-var-lib-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.044051 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-systemd-units\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.044071 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-config\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.044085 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovn-node-metrics-cert\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.044112 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-node-log\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.054907 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:01:11.186971683 +0000 UTC Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.083581 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.097909 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.097998 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.098037 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.098119 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.098188 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.098235 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.108774 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.115469 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 10:20:46 +0000 UTC, rotation deadline is 2026-11-26 14:47:45.187706921 +0000 UTC Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.115537 4752 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7396h21m58.072172727s for next certificate rotation Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.133166 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.141466 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.141502 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.141511 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.141526 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.141536 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.144935 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-systemd-units\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.144968 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-config\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.144984 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovn-node-metrics-cert\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145001 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-node-log\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145016 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-systemd\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145039 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-kubelet\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145054 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-ovn-kubernetes\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145069 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-netd\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145089 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-bin\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145104 
4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb6gs\" (UniqueName: \"kubernetes.io/projected/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-kube-api-access-sb6gs\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145119 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-etc-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145117 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-node-log\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145155 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-bin\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145174 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-ovn\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145174 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-ovn-kubernetes\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145195 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-netd\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145124 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-systemd\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145203 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-etc-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145066 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-systemd-units\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145135 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-ovn\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145233 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-kubelet\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145256 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-env-overrides\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145346 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-netns\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145369 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-netns\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145374 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-log-socket\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145399 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-log-socket\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145408 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-slash\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145425 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145441 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145456 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-script-lib\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145475 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-var-lib-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145500 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145523 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-var-lib-openvswitch\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145527 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-slash\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145550 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145776 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-env-overrides\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.145827 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-config\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.146089 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-script-lib\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.148591 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovn-node-metrics-cert\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.161174 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.168444 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb6gs\" (UniqueName: \"kubernetes.io/projected/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-kube-api-access-sb6gs\") pod \"ovnkube-node-784rk\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.191552 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.208554 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.219091 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.232168 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.237945 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.243734 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.243782 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.243794 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.243811 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.243824 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.245469 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: W0122 10:25:47.248408 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdaf9138_3ac1_4555_93c0_c8ddc3ef2c25.slice/crio-a48cec0bb0d694184188ffe2af6c7bcac6910a68a0f4d7d7b96f7e40ade0e9f3 WatchSource:0}: Error finding container a48cec0bb0d694184188ffe2af6c7bcac6910a68a0f4d7d7b96f7e40ade0e9f3: Status 404 returned error can't find the container with id a48cec0bb0d694184188ffe2af6c7bcac6910a68a0f4d7d7b96f7e40ade0e9f3 Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.263509 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.272581 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"a48cec0bb0d694184188ffe2af6c7bcac6910a68a0f4d7d7b96f7e40ade0e9f3"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.273663 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-prdjr" event={"ID":"01ba31a2-a4da-4736-8b30-1c4cf57e39fd","Type":"ContainerStarted","Data":"1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.273723 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-prdjr" event={"ID":"01ba31a2-a4da-4736-8b30-1c4cf57e39fd","Type":"ContainerStarted","Data":"521fcd759000a6e046f27554734b23fa8a16d682222490b3347b166b3018ef0a"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.274976 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6nmbt" event={"ID":"25322265-5a85-4c78-bf60-61836307404e","Type":"ContainerStarted","Data":"6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.275007 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6nmbt" event={"ID":"25322265-5a85-4c78-bf60-61836307404e","Type":"ContainerStarted","Data":"6398cc6367e7604387fe1b17249d1d9748e4cb0ec8d297cc16133a359773ca35"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.276571 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.288517 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.299795 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.309734 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.322516 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.334675 4752 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.345959 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.346017 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.346026 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.346042 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.346053 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.353970 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.366069 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.379101 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.380107 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.384801 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb8df70c-9474-4827-8831-f39fc6883d79-proxy-tls\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.391504 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.401087 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.416808 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.428126 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.445650 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.448409 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.448454 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 
10:25:47.448468 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.448483 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.448496 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.458327 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.468638 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.482047 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.495201 4752 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.510155 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:47Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.552483 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.552525 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.552546 4752 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.552566 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.552579 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.655002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.655051 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.655062 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.655078 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.655090 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.742256 4752 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.742344 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-sysctl-allowlist podName:8271e9d0-84de-47c5-82bb-35fd1af29e23 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:48.242326141 +0000 UTC m=+27.472269039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-6pbrv" (UID: "8271e9d0-84de-47c5-82bb-35fd1af29e23") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.757628 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.757686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.757695 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.757709 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.757719 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.757832 4752 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.757868 4752 projected.go:194] Error preparing data for projected volume kube-api-access-ffsqt for pod openshift-machine-config-operator/machine-config-daemon-v6hm8: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:25:47 crc kubenswrapper[4752]: E0122 10:25:47.757923 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb8df70c-9474-4827-8831-f39fc6883d79-kube-api-access-ffsqt podName:eb8df70c-9474-4827-8831-f39fc6883d79 nodeName:}" failed. No retries permitted until 2026-01-22 10:25:48.257908454 +0000 UTC m=+27.487851352 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ffsqt" (UniqueName: "kubernetes.io/projected/eb8df70c-9474-4827-8831-f39fc6883d79-kube-api-access-ffsqt") pod "machine-config-daemon-v6hm8" (UID: "eb8df70c-9474-4827-8831-f39fc6883d79") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.832502 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.859532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.859560 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.859594 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.859606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.859615 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.894116 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.962969 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.963231 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.963242 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.963261 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:47 crc kubenswrapper[4752]: I0122 10:25:47.963272 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:47Z","lastTransitionTime":"2026-01-22T10:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.013996 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.041497 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.055428 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:19:50.165294237 +0000 UTC Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.065515 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.065559 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.065571 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.065586 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.065597 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.168129 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.168173 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.168185 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.168202 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.168216 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.255146 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.255709 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8271e9d0-84de-47c5-82bb-35fd1af29e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6pbrv\" (UID: \"8271e9d0-84de-47c5-82bb-35fd1af29e23\") " pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.270845 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.270921 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.270934 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.270956 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.270969 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.277532 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" exitCode=0 Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.277574 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.290750 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac1
17eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.308286 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.320849 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.344542 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.356042 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffsqt\" (UniqueName: \"kubernetes.io/projected/eb8df70c-9474-4827-8831-f39fc6883d79-kube-api-access-ffsqt\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.359693 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffsqt\" (UniqueName: 
\"kubernetes.io/projected/eb8df70c-9474-4827-8831-f39fc6883d79-kube-api-access-ffsqt\") pod \"machine-config-daemon-v6hm8\" (UID: \"eb8df70c-9474-4827-8831-f39fc6883d79\") " pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.363193 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.363466 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.373549 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.373597 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.373614 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.373636 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.373652 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.388809 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f
9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.403464 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.417577 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 
2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.433563 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.445786 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.462205 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.477371 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.477415 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.477424 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.477438 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.477448 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.485256 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.499609 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.513172 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:48Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.579630 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.579659 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.579666 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.579678 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.579687 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.655361 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:25:48 crc kubenswrapper[4752]: W0122 10:25:48.672224 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb8df70c_9474_4827_8831_f39fc6883d79.slice/crio-07a751db3563595f3459dd9deff3d21d98e60f3f8dc53962c1b45be73cc8620b WatchSource:0}: Error finding container 07a751db3563595f3459dd9deff3d21d98e60f3f8dc53962c1b45be73cc8620b: Status 404 returned error can't find the container with id 07a751db3563595f3459dd9deff3d21d98e60f3f8dc53962c1b45be73cc8620b Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.681501 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.681531 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.681543 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.681559 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.681571 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.783184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.783220 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.783231 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.783246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.783256 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.885846 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.885907 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.885917 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.885936 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.885949 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.987735 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.987776 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.987789 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.987806 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:48 crc kubenswrapper[4752]: I0122 10:25:48.987817 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:48Z","lastTransitionTime":"2026-01-22T10:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.055604 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:55:23.958925937 +0000 UTC Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.086778 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.090224 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.090259 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.090271 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.090286 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.090297 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.100254 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:49 crc kubenswrapper[4752]: E0122 10:25:49.100359 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.100694 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:49 crc kubenswrapper[4752]: E0122 10:25:49.100752 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.100797 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:49 crc kubenswrapper[4752]: E0122 10:25:49.100847 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.105048 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.123455 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.139343 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.153146 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.170487 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.189772 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b
160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4
049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.192174 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.192241 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.192259 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.192283 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.192299 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.202639 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.218372 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.231341 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.243319 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.252930 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.266732 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.278163 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.285298 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.285360 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.285377 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.285393 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.285407 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.285422 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.286918 4752 generic.go:334] "Generic (PLEG): container finished" podID="8271e9d0-84de-47c5-82bb-35fd1af29e23" containerID="82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552" exitCode=0 Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.286974 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerDied","Data":"82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.287034 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerStarted","Data":"6496fb6975e50669489e8d7e8f1d52a111089ff5bec62947309f4a7e8ce9fb75"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.290284 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.290332 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.290346 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"07a751db3563595f3459dd9deff3d21d98e60f3f8dc53962c1b45be73cc8620b"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.294684 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.294722 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.294734 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.294752 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.294765 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.329606 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z 
is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.357506 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.372023 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.387057 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.397031 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.397063 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.397073 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.397091 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.397102 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.402801 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.414368 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.428082 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc 
kubenswrapper[4752]: I0122 10:25:49.438284 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.449394 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.469654 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.481349 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.499571 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.501074 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.501101 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.501109 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.501123 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.501130 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.509909 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.521939 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.530886 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:49Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.603801 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.603831 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.603842 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.603879 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.603890 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.706098 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.706373 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.706500 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.706580 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.706648 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.808513 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.808566 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.808586 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.808607 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.808625 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.911901 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.912155 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.912165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.912178 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:49 crc kubenswrapper[4752]: I0122 10:25:49.912188 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:49Z","lastTransitionTime":"2026-01-22T10:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.014319 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.014382 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.014399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.014430 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.014455 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.056463 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:00:16.595781953 +0000 UTC Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.117130 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.117188 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.117203 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.117222 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.117239 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.219776 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.219824 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.219835 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.219884 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.219900 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.296422 4752 generic.go:334] "Generic (PLEG): container finished" podID="8271e9d0-84de-47c5-82bb-35fd1af29e23" containerID="2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc" exitCode=0 Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.296497 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerDied","Data":"2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.312373 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.323084 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.323133 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.323142 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.323161 4752 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.323174 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.329510 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.348323 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.362451 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.383765 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.407761 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.424952 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.427936 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.427983 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.427996 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.428014 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.428026 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.440006 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.459711 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.475452 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.488325 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.507434 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.524099 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.529757 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.529818 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.529835 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.529881 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.529900 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.546063 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z 
is after 2025-08-24T17:21:41Z" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.632214 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.632271 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.632289 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.632312 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.632329 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.735762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.735843 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.735910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.735944 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.735969 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.839579 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.839638 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.839659 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.839683 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.839701 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.942751 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.942809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.942832 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.942893 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.942916 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:50Z","lastTransitionTime":"2026-01-22T10:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.946037 4752 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.979395 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6v582"] Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.979929 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.981962 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.982341 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.982600 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.983808 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 10:25:50 crc kubenswrapper[4752]: I0122 10:25:50.997578 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:50Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.016675 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.040495 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.044870 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.044947 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.044960 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.044978 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.044991 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.054370 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.056715 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:34:30.019071156 +0000 UTC Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.067463 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.081472 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf
92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.088304 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8db1e85a-38c2-41d7-8b2e-684b64e813be-host\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.088386 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7kc\" (UniqueName: \"kubernetes.io/projected/8db1e85a-38c2-41d7-8b2e-684b64e813be-kube-api-access-mj7kc\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.088411 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8db1e85a-38c2-41d7-8b2e-684b64e813be-serviceca\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.097475 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.097506 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.097535 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:51 crc kubenswrapper[4752]: E0122 10:25:51.097738 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:51 crc kubenswrapper[4752]: E0122 10:25:51.097804 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:51 crc kubenswrapper[4752]: E0122 10:25:51.097997 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.102841 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.124677 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.136784 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.146905 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.146962 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.146977 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.146992 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.147005 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.152024 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.164985 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.185134 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.189229 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj7kc\" (UniqueName: \"kubernetes.io/projected/8db1e85a-38c2-41d7-8b2e-684b64e813be-kube-api-access-mj7kc\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.189277 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8db1e85a-38c2-41d7-8b2e-684b64e813be-serviceca\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.190079 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8db1e85a-38c2-41d7-8b2e-684b64e813be-host\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.190156 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8db1e85a-38c2-41d7-8b2e-684b64e813be-host\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.190738 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8db1e85a-38c2-41d7-8b2e-684b64e813be-serviceca\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.202083 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.214283 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.219182 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj7kc\" (UniqueName: \"kubernetes.io/projected/8db1e85a-38c2-41d7-8b2e-684b64e813be-kube-api-access-mj7kc\") pod \"node-ca-6v582\" (UID: \"8db1e85a-38c2-41d7-8b2e-684b64e813be\") " pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.231342 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.244548 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.248819 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.248997 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.249099 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.249196 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.249292 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.258658 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.271756 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.286529 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.295513 4752 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-image-registry/node-ca-6v582" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.305081 4752 generic.go:334] "Generic (PLEG): container finished" podID="8271e9d0-84de-47c5-82bb-35fd1af29e23" containerID="d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2" exitCode=0 Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.305222 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerDied","Data":"d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.306558 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: W0122 10:25:51.310608 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8db1e85a_38c2_41d7_8b2e_684b64e813be.slice/crio-92666edb85fea77e7a08fe7f71004b4e5f3a2c38e1bde2f0001a7fbeaee40ee0 WatchSource:0}: Error finding container 92666edb85fea77e7a08fe7f71004b4e5f3a2c38e1bde2f0001a7fbeaee40ee0: Status 404 returned error can't find the container with id 92666edb85fea77e7a08fe7f71004b4e5f3a2c38e1bde2f0001a7fbeaee40ee0 Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.312922 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.330425 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.345560 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.352914 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.352961 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.352980 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.353004 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.353023 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.358007 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.372548 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.390655 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.403325 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.422806 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.437636 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.456118 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.456157 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.456169 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.456186 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.456198 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.456503 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z 
is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.467936 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.482080 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.499801 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.519158 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.536920 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.549604 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.558727 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.558761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.558771 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.558786 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.558796 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.563265 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.579246 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.595564 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.605547 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.618227 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.628711 4752 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.640313 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.654336 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.660710 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.660749 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.660762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.660778 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.660790 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.668697 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.682220 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:51Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.763249 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.763283 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.763471 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.763486 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.763496 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.866112 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.866155 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.866171 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.866193 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.866210 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.969123 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.969178 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.969195 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.969217 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:51 crc kubenswrapper[4752]: I0122 10:25:51.969234 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:51Z","lastTransitionTime":"2026-01-22T10:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.057975 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:51:06.286475936 +0000 UTC Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.071482 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.071523 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.071532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.071545 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.071554 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.174367 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.174398 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.174407 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.174422 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.174431 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.278293 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.278337 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.278349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.278369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.278384 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.318121 4752 generic.go:334] "Generic (PLEG): container finished" podID="8271e9d0-84de-47c5-82bb-35fd1af29e23" containerID="66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff" exitCode=0 Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.318192 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerDied","Data":"66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.319069 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6v582" event={"ID":"8db1e85a-38c2-41d7-8b2e-684b64e813be","Type":"ContainerStarted","Data":"5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.319092 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6v582" event={"ID":"8db1e85a-38c2-41d7-8b2e-684b64e813be","Type":"ContainerStarted","Data":"92666edb85fea77e7a08fe7f71004b4e5f3a2c38e1bde2f0001a7fbeaee40ee0"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.330874 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.345466 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.357277 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.380471 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.380505 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.380532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.380546 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.380554 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.383574 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f
9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.397295 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.418513 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.434453 4752 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.446067 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.461476 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.474085 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.482623 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.482685 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.482698 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.482715 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.482729 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.489894 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.504792 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 
2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.518356 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.536075 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.554379 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.566254 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.583880 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.585302 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.585351 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.585363 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.585385 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.585399 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.595315 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.610650 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
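The patch bodies in these messages are hard to read because journald stores them as Go-quoted strings inside the error text. A small, hypothetical reading aid (not part of any OpenShift tooling): copy a quoted payload out of a message, outer quotes included, and run it through strconv.Unquote followed by json.Indent. The input below is a deliberately truncated stand-in that reuses only the kube-apiserver-crc UID from the entry that follows.

```go
// Hypothetical reading aid: decode one of the Go-quoted status
// patches embedded in the kubelet errors above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// Truncated stand-in; paste a full quoted payload from the journal here.
	escaped := `"{\"metadata\":{\"uid\":\"5500f584-6d10-4bf4-8ec6-98157d49828c\"}}"`
	unquoted, err := strconv.Unquote(escaped) // undo the Go string quoting
	if err != nil {
		log.Fatal(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(pretty.String())
}
```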
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.623735 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.641791 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.654820 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.656383 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.656454 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.656479 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.656510 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.656532 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.668052 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: E0122 10:25:52.669846 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.673726 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.673758 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.673768 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.673782 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.673792 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.683043 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: E0122 10:25:52.692537 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.697126 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.697171 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.697182 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.697206 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.697217 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.698950 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.710616 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: E0122 10:25:52.714230 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.718055 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.718092 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.718101 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.718117 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.718127 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: E0122 10:25:52.734916 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.739112 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.739168 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.739186 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.739211 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.739230 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.743286 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.755702 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: E0122 10:25:52.759442 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: E0122 10:25:52.759661 4752 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.761354 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.761405 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.761422 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.761445 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.761461 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.772931 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.788520 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:52Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.864714 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.864754 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.864765 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.864785 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.864798 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.968361 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.968442 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.968460 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.968485 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:52 crc kubenswrapper[4752]: I0122 10:25:52.968503 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:52Z","lastTransitionTime":"2026-01-22T10:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.067579 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:15:36.78990107 +0000 UTC Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.070760 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.070802 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.070811 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.070825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.070835 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.097060 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.097080 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.097191 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:53 crc kubenswrapper[4752]: E0122 10:25:53.097566 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:53 crc kubenswrapper[4752]: E0122 10:25:53.097400 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:53 crc kubenswrapper[4752]: E0122 10:25:53.097811 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.173761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.173807 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.173821 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.173841 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.173880 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.276050 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.276101 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.276119 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.276144 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.276161 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.326812 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.327126 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.332023 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerStarted","Data":"8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.345583 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.357141 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.369001 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.378669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.378733 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.378752 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.378779 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.378807 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.397753 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a172
2b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.414703 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.435373 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.449586 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.463347 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.479320 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.481094 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.481175 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.481196 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.481221 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.481240 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.493966 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.515952 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.532745 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.545192 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.559477 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.571302 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.580326 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.583934 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.583963 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.583971 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.583984 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.583992 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.591352 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.601803 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.617132 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.628194 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.645231 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b83
57817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.666704 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.684398 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.685915 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.685944 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.685955 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.685971 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.685982 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.697694 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.714458 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.727607 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.741231 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.756535 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.773004 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.788126 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.788155 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.788162 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.788177 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.788185 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.795005 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a172
2b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.810522 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:53Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.891693 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.891808 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.891834 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.891904 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.891930 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.996379 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.996461 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.996479 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.997146 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:53 crc kubenswrapper[4752]: I0122 10:25:53.997204 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:53Z","lastTransitionTime":"2026-01-22T10:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.068569 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:19:46.695854323 +0000 UTC Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.099816 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.100123 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.100261 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.100399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.100526 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.203605 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.203669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.203686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.203711 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.203727 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.306208 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.306273 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.306286 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.306309 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.306325 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.339697 4752 generic.go:334] "Generic (PLEG): container finished" podID="8271e9d0-84de-47c5-82bb-35fd1af29e23" containerID="8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556" exitCode=0 Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.339808 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerDied","Data":"8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.340509 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.340710 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.364663 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.382756 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.386734 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.409332 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.409389 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.409406 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.409429 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.409447 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.419852 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a172
2b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.436306 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.456034 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.471919 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.485205 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.497267 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.512591 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.512631 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.512641 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.512661 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.512673 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.513526 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.535658 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.548659 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.560979 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.573995 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.587100 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.598941 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.611481 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.614833 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.614875 4752 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.614887 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.614901 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.614912 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.622169 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.633496 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.645393 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.655681 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.677738 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.691489 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.703459 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.716979 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.718762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.718804 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.718815 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.718837 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.718868 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.749888 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.788361 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.820551 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.820583 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.820591 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.820606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.820616 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.828544 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.828646 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:10.828625923 +0000 UTC m=+50.058568831 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.828764 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.828944 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.828996 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:10.828984902 +0000 UTC m=+50.058927810 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.829972 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.869101 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.911381 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.922532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.922569 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.922580 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.922595 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.922606 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:54Z","lastTransitionTime":"2026-01-22T10:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.929941 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.929986 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.930011 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930112 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930124 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930144 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930155 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930165 4752 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:10.930148083 +0000 UTC m=+50.160090991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930185 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:10.930175784 +0000 UTC m=+50.160118692 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930204 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930259 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930280 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:54 crc kubenswrapper[4752]: E0122 10:25:54.930358 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:10.930331488 +0000 UTC m=+50.160274436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:25:54 crc kubenswrapper[4752]: I0122 10:25:54.949789 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:54Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.026211 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.026282 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 
10:25:55.026302 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.026327 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.026345 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.069627 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:30:06.466046952 +0000 UTC Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.097487 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.097555 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.097498 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:55 crc kubenswrapper[4752]: E0122 10:25:55.097718 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:55 crc kubenswrapper[4752]: E0122 10:25:55.097984 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:55 crc kubenswrapper[4752]: E0122 10:25:55.098150 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.129686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.129761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.129786 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.129818 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.129844 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.232336 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.232399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.232418 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.232445 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.232462 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.351408 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.351880 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.352005 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.352036 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.352100 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.361954 4752 generic.go:334] "Generic (PLEG): container finished" podID="8271e9d0-84de-47c5-82bb-35fd1af29e23" containerID="b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012" exitCode=0 Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.362027 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerDied","Data":"b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.362098 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.389920 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.410091 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.425578 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.437690 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.456078 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.456119 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.456128 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.456143 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.456161 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.458187 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.474923 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.489565 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.501541 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.515173 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.527117 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.541058 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.558430 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.558476 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.558488 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.558504 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.558515 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.563509 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.574751 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.591340 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.601195 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:55Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.661666 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.661759 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.661780 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.661805 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.661825 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.764558 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.764625 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.764643 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.764669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.764689 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.868002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.868063 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.868080 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.868108 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.868125 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.971382 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.971428 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.971441 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.971459 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:55 crc kubenswrapper[4752]: I0122 10:25:55.971468 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:55Z","lastTransitionTime":"2026-01-22T10:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.070692 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:33:54.960393982 +0000 UTC Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.074282 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.074331 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.074347 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.074369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.074388 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.177445 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.177522 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.177546 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.177583 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.177610 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.283403 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.283831 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.283893 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.283927 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.283950 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.372434 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" event={"ID":"8271e9d0-84de-47c5-82bb-35fd1af29e23","Type":"ContainerStarted","Data":"198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.372495 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.386796 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.386835 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.386846 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.386893 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.386907 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.394750 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.424938 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.439622 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.457159 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.489838 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.489920 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.489938 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.489961 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.489978 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.490996 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.523121 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a172
2b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.545118 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.569669 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.592267 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.592311 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.592319 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.592335 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.592345 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.594543 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.612751 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.627703 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.643323 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.667544 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.680588 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.694740 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:56Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.694989 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.695008 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.695016 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.695030 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.695041 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.797816 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.797945 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.797978 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.798003 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.798033 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.900782 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.900824 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.900836 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.900873 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:56 crc kubenswrapper[4752]: I0122 10:25:56.900888 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:56Z","lastTransitionTime":"2026-01-22T10:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.003635 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.003684 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.003696 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.003723 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.003735 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.071287 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:16:38.074558768 +0000 UTC Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.096988 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.097051 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:57 crc kubenswrapper[4752]: E0122 10:25:57.097203 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.097640 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:57 crc kubenswrapper[4752]: E0122 10:25:57.097770 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:57 crc kubenswrapper[4752]: E0122 10:25:57.097930 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.105606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.105661 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.105678 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.105708 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.105726 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.208733 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.208797 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.208820 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.208848 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.208901 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.310819 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.310917 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.310932 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.310947 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.310960 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.379235 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/0.log" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.384412 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684" exitCode=1 Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.384467 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.385819 4752 scope.go:117] "RemoveContainer" containerID="5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.411389 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.413529 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.413579 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.413596 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.413618 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.413640 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.433568 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.446066 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.466955 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.489482 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.516501 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.516554 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.516585 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.516606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.516620 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.523678 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a172
2b5ceed4ed30af5b8249f684\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.547821 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.565959 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.581940 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.598098 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.614203 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.617991 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.618024 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.618034 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.618050 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.618060 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.634587 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.657067 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.671030 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.686586 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:57Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.720449 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.720481 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.720490 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.720502 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.720510 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.823165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.823205 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.823213 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.823229 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.823237 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.925539 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.925587 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.925598 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.925615 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:57 crc kubenswrapper[4752]: I0122 10:25:57.925626 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:57Z","lastTransitionTime":"2026-01-22T10:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.028760 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.028829 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.028847 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.028900 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.028921 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.072009 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:25:23.525488823 +0000 UTC Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.131831 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.131886 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.131896 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.131910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.131921 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.234636 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.234683 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.234696 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.234712 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.234724 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.336892 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.336952 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.336964 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.336987 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.337003 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.391835 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/0.log" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.395790 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.395927 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.439164 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.439214 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.439231 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.439253 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.439270 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.440305 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.460053 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.472884 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.487061 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.501438 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.512492 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.530253 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.541416 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.541451 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.541462 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.541480 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.541493 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.546434 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.564234 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf
734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.575851 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.591933 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.604668 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.618631 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.636630 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.643503 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.643545 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.643561 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.643580 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.643593 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.650226 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:58Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.746692 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.746767 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.746779 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.746798 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.746811 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.848834 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.848898 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.848909 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.848925 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.848936 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.951425 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.951460 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.951468 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.951479 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:58 crc kubenswrapper[4752]: I0122 10:25:58.951487 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:58Z","lastTransitionTime":"2026-01-22T10:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.072396 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:40:07.582620732 +0000 UTC Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.097828 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.097906 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:25:59 crc kubenswrapper[4752]: E0122 10:25:59.098002 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.098017 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:25:59 crc kubenswrapper[4752]: E0122 10:25:59.098137 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:25:59 crc kubenswrapper[4752]: E0122 10:25:59.098242 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.153764 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.153820 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.153849 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.153897 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.153913 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.247479 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx"] Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.247991 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.251736 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.252533 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.256576 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.256637 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.256653 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.256675 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.256692 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.269377 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.284584 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.302953 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.318392 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.332346 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.347546 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.359404 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.359457 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.359473 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.359494 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.359510 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.362128 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.374487 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.401989 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.402841 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea20a00a-de56-4ce1-b008-6ebe5ba07354-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.402928 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sscj\" (UniqueName: \"kubernetes.io/projected/ea20a00a-de56-4ce1-b008-6ebe5ba07354-kube-api-access-4sscj\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.402972 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea20a00a-de56-4ce1-b008-6ebe5ba07354-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.403024 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea20a00a-de56-4ce1-b008-6ebe5ba07354-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.416892 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.428482 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.439755 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.455647 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.461951 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.461999 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.462015 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.462034 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.462051 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.469090 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.486991 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 
2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.504012 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea20a00a-de56-4ce1-b008-6ebe5ba07354-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.504143 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sscj\" (UniqueName: \"kubernetes.io/projected/ea20a00a-de56-4ce1-b008-6ebe5ba07354-kube-api-access-4sscj\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.504198 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea20a00a-de56-4ce1-b008-6ebe5ba07354-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.504272 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea20a00a-de56-4ce1-b008-6ebe5ba07354-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.505234 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea20a00a-de56-4ce1-b008-6ebe5ba07354-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.505415 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea20a00a-de56-4ce1-b008-6ebe5ba07354-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.506620 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:25:59Z is after 2025-08-24T17:21:41Z" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.512813 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ea20a00a-de56-4ce1-b008-6ebe5ba07354-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.533288 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sscj\" (UniqueName: \"kubernetes.io/projected/ea20a00a-de56-4ce1-b008-6ebe5ba07354-kube-api-access-4sscj\") pod \"ovnkube-control-plane-749d76644c-nxjjx\" (UID: \"ea20a00a-de56-4ce1-b008-6ebe5ba07354\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.564030 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.564055 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.564062 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.564074 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.564082 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.568745 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" Jan 22 10:25:59 crc kubenswrapper[4752]: W0122 10:25:59.590191 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea20a00a_de56_4ce1_b008_6ebe5ba07354.slice/crio-f73747b617a01d44f2dc589456d9a4e28ee57bf932a938cf1dfd07ba6cfc9238 WatchSource:0}: Error finding container f73747b617a01d44f2dc589456d9a4e28ee57bf932a938cf1dfd07ba6cfc9238: Status 404 returned error can't find the container with id f73747b617a01d44f2dc589456d9a4e28ee57bf932a938cf1dfd07ba6cfc9238 Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.666847 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.666938 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.666959 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.666977 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.666989 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.770445 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.770512 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.770536 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.770563 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.770582 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.873298 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.873750 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.873770 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.873796 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.873813 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.976847 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.976933 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.976948 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.976970 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:25:59 crc kubenswrapper[4752]: I0122 10:25:59.976985 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:25:59Z","lastTransitionTime":"2026-01-22T10:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.072974 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:32:34.510571683 +0000 UTC Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.080585 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.080627 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.080638 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.080656 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.080669 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.183402 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.183490 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.183511 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.183543 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.183561 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.286737 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.286800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.286814 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.286838 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.286894 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.389662 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.389714 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.389732 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.389755 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.389775 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.405788 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" event={"ID":"ea20a00a-de56-4ce1-b008-6ebe5ba07354","Type":"ContainerStarted","Data":"da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.405886 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" event={"ID":"ea20a00a-de56-4ce1-b008-6ebe5ba07354","Type":"ContainerStarted","Data":"c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.405911 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" event={"ID":"ea20a00a-de56-4ce1-b008-6ebe5ba07354","Type":"ContainerStarted","Data":"f73747b617a01d44f2dc589456d9a4e28ee57bf932a938cf1dfd07ba6cfc9238"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.408735 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/1.log" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.409491 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/0.log" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.414532 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d" exitCode=1 Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.414563 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.414635 4752 scope.go:117] "RemoveContainer" containerID="5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.415453 4752 scope.go:117] "RemoveContainer" containerID="946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d" Jan 22 10:26:00 crc kubenswrapper[4752]: E0122 10:26:00.415637 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\"" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.423764 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.443325 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.460819 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.484822 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf
734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.493467 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.493519 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.493531 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.493549 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.493561 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.499401 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.518939 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.537467 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.551649 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.568006 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.585970 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.596309 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.596351 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.596364 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.596385 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.596401 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.609328 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.626483 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.644081 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.660473 4752 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.679653 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.699517 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.699566 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.699577 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.699595 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.699608 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.704356 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.718611 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.736877 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.753946 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.772695 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-69crw"] Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.773432 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:00 crc kubenswrapper[4752]: E0122 10:26:00.773522 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.785716 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 
10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.799986 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.801665 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.801745 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.801771 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.801808 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.801833 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.821478 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.843141 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.861165 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.880562 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.896808 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.904608 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.904659 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.904679 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.904704 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.904723 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:00Z","lastTransitionTime":"2026-01-22T10:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.917270 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.919803 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.919931 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmq5\" (UniqueName: \"kubernetes.io/projected/6bbb033b-8d31-4200-b77f-4910b5170085-kube-api-access-pmmq5\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.935439 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 
10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.950387 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.971444 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:00 crc kubenswrapper[4752]: I0122 10:26:00.990472 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:00Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.008178 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.008277 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.008299 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.008325 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.008345 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.016034 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.020905 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.020967 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmq5\" (UniqueName: \"kubernetes.io/projected/6bbb033b-8d31-4200-b77f-4910b5170085-kube-api-access-pmmq5\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.021114 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.021210 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:01.521184954 +0000 UTC m=+40.751127902 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.032444 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.044326 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmq5\" (UniqueName: \"kubernetes.io/projected/6bbb033b-8d31-4200-b77f-4910b5170085-kube-api-access-pmmq5\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.047507 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.063626 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.073376 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:27:11.546058639 +0000 UTC Jan 22 10:26:01 crc 
kubenswrapper[4752]: I0122 10:26:01.076944 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.088896 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.097547 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.097587 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.097704 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.097764 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.097928 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.098034 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.102965 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/
\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.110496 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.110535 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.110551 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.110573 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.110593 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.121533 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.139220 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.199043 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.212920 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.212950 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.212958 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.212970 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.212979 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.217757 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.232167 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.245575 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.255427 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.266385 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.283916 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.301340 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.315545 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.315581 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.315591 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.315608 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.315618 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.326453 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf
734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 
10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.358530 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\
\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.374304 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.389982 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.403703 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.417022 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.417058 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.417069 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.417086 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.417096 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.418643 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.419137 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/1.log" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.431814 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.444597 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.460836 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.477662 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.500038 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bb5cd581900d04c2e02a19d8f4f29d664e6a1722b5ceed4ed30af5b8249f684\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"message\\\":\\\"troller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 10:25:56.427258 6017 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:56.427740 6017 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:56.427768 6017 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0122 10:25:56.427831 6017 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:56.427868 6017 factory.go:656] Stopping watch factory\\\\nI0122 10:25:56.427275 6017 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427890 6017 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 10:25:56.427900 6017 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 10:25:56.427851 6017 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:56.427982 6017 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:56.427302 6017 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427331 6017 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:56.427373 6017 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 
10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.512848 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.518725 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.518916 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.519040 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.519154 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.519266 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.524323 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.524432 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:01 crc kubenswrapper[4752]: E0122 10:26:01.524483 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:02.524466001 +0000 UTC m=+41.754408909 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.527286 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.544128 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.555325 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.568743 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.581453 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.595168 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:01Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.621671 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.621897 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.622050 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.622181 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.622296 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.725509 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.725583 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.725602 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.725624 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.725642 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.828581 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.828628 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.828645 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.828669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.828687 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.931726 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.931775 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.931788 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.931806 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:01 crc kubenswrapper[4752]: I0122 10:26:01.931820 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:01Z","lastTransitionTime":"2026-01-22T10:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.034812 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.034913 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.034936 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.034964 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.034986 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.073649 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:33:58.695035341 +0000 UTC Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.097230 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:02 crc kubenswrapper[4752]: E0122 10:26:02.097491 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.138442 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.138497 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.138514 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.138540 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.138557 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.241545 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.241631 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.241657 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.241691 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.241716 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.345508 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.345586 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.345607 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.345633 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.345651 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.449033 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.449169 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.449197 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.449227 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.449246 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.534823 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:02 crc kubenswrapper[4752]: E0122 10:26:02.535115 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:02 crc kubenswrapper[4752]: E0122 10:26:02.535263 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:04.535234035 +0000 UTC m=+43.765176973 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.552954 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.553014 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.553031 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.553055 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.553073 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.656265 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.656316 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.656330 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.656347 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.656359 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.758938 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.758966 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.758974 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.758986 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.758994 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.861830 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.861960 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.861984 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.862015 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.862037 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.964697 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.964781 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.964809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.964843 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:02 crc kubenswrapper[4752]: I0122 10:26:02.964919 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:02Z","lastTransitionTime":"2026-01-22T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.067363 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.067505 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.067526 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.067550 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.067568 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.074702 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:41:52.763857467 +0000 UTC Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.097209 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.097279 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.097303 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.097405 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.097563 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.097679 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.161742 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.161809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.161825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.161844 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.161906 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.183744 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:03Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.189358 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.189407 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.189420 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.189442 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.189458 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.209458 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:03Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.214991 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.215094 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.215116 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.215179 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.215201 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.236382 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:03Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.242454 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.242506 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.242525 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.242550 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.242567 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.261213 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:03Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.266338 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.266390 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.266407 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.266430 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.266446 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.285746 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:03Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:03 crc kubenswrapper[4752]: E0122 10:26:03.285998 4752 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.288559 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.288603 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.288613 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.288629 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.288641 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.391982 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.392062 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.392087 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.392120 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.392141 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.495708 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.495798 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.495821 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.495887 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.495911 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.598424 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.598462 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.598471 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.598483 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.598512 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.701020 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.701073 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.701088 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.701109 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.701124 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.803471 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.803528 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.803545 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.803572 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.803590 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.906292 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.906341 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.906353 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.906369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:03 crc kubenswrapper[4752]: I0122 10:26:03.906383 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:03Z","lastTransitionTime":"2026-01-22T10:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.008827 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.008884 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.008897 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.008913 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.008925 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.069396 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.070248 4752 scope.go:117] "RemoveContainer" containerID="946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d" Jan 22 10:26:04 crc kubenswrapper[4752]: E0122 10:26:04.070446 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\"" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.074838 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:15:00.408824125 +0000 UTC Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.087051 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.097801 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:04 crc kubenswrapper[4752]: E0122 10:26:04.097990 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.105960 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.113143 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.113170 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.113178 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.113191 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.113201 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.123641 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.141007 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.160464 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.175422 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.198940 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 
10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.212492 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.232063 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.232142 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.232198 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.232220 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.232235 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.244052 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.262828 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.275927 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.287695 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.304470 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.319971 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.332351 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.334570 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.334621 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.334634 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.334657 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.334670 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.354993 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.368964 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:04Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.437070 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.437128 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.437145 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.437170 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.437188 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.540675 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.540754 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.540777 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.540811 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.540833 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.560511 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:04 crc kubenswrapper[4752]: E0122 10:26:04.560698 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:04 crc kubenswrapper[4752]: E0122 10:26:04.560812 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:08.560779004 +0000 UTC m=+47.790721952 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.645765 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.645824 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.645847 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.645905 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.645927 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.749660 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.749712 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.749729 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.749753 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.749772 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.852336 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.852373 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.852385 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.852401 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.852412 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.955486 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.955533 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.955550 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.955566 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:04 crc kubenswrapper[4752]: I0122 10:26:04.955579 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:04Z","lastTransitionTime":"2026-01-22T10:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.059036 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.059102 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.059126 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.059154 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.059176 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.074981 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:25:02.967646583 +0000 UTC Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.097410 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.097509 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.097549 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:05 crc kubenswrapper[4752]: E0122 10:26:05.097684 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:05 crc kubenswrapper[4752]: E0122 10:26:05.097899 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:05 crc kubenswrapper[4752]: E0122 10:26:05.098026 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.161729 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.161804 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.161837 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.161913 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.161950 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.265729 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.265785 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.265800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.265824 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.265840 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.369048 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.369953 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.370018 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.370052 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.370074 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.472304 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.472357 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.472372 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.472394 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.472411 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.580937 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.580974 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.580981 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.580993 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.581001 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.684002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.684053 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.684070 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.684092 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.684110 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.787776 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.787818 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.787826 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.787841 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.787872 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.890756 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.890836 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.890886 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.890919 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.890942 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.993537 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.993779 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.993787 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.993800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:05 crc kubenswrapper[4752]: I0122 10:26:05.993809 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:05Z","lastTransitionTime":"2026-01-22T10:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.075521 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:29:44.413907155 +0000 UTC Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.096900 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.096938 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.096948 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.096960 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.096970 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.096996 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:06 crc kubenswrapper[4752]: E0122 10:26:06.097145 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.199067 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.199123 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.199139 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.199162 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.199181 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.302246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.302344 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.302358 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.302377 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.302391 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.405724 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.405785 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.405802 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.405825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.405843 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.508969 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.509334 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.509476 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.509637 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.509784 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.613335 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.613381 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.613393 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.613413 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.613425 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.717514 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.717613 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.717633 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.718322 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.718406 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.821587 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.821665 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.821689 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.821721 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.821743 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.924960 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.925025 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.925046 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.925077 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:06 crc kubenswrapper[4752]: I0122 10:26:06.925099 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:06Z","lastTransitionTime":"2026-01-22T10:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.028772 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.028822 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.028838 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.028907 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.028931 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.075839 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:04:47.185090235 +0000 UTC Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.098064 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:07 crc kubenswrapper[4752]: E0122 10:26:07.098190 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.098610 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:07 crc kubenswrapper[4752]: E0122 10:26:07.098686 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.098823 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:07 crc kubenswrapper[4752]: E0122 10:26:07.098925 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.131603 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.131637 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.131645 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.131657 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.131683 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.234644 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.234754 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.234774 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.234800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.234845 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.340074 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.340154 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.340177 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.340205 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.340222 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.443626 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.443693 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.443715 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.443745 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.443767 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.546744 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.546791 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.546801 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.546820 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.546833 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.650394 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.650463 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.650481 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.650505 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.650524 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.753340 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.753419 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.753437 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.753462 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.753480 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.856732 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.856787 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.856803 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.856838 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.856909 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.960118 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.960190 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.960215 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.960244 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:07 crc kubenswrapper[4752]: I0122 10:26:07.960265 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:07Z","lastTransitionTime":"2026-01-22T10:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.063405 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.063472 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.063491 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.063515 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.063534 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.076254 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:41:20.765428924 +0000 UTC Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.097017 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:08 crc kubenswrapper[4752]: E0122 10:26:08.097185 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.167919 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.167997 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.168037 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.168073 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.168096 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.271084 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.271149 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.271165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.271189 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.271206 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.374210 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.374267 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.374285 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.374310 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.374326 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.476761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.476802 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.476817 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.476834 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.476846 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.578941 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.578980 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.578991 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.579007 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.579019 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.601040 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:08 crc kubenswrapper[4752]: E0122 10:26:08.601227 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:08 crc kubenswrapper[4752]: E0122 10:26:08.601309 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:16.601286728 +0000 UTC m=+55.831229676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.681669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.681739 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.681755 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.681777 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.681793 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.785307 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.785381 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.785401 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.785428 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.785451 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.887901 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.887937 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.887947 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.887961 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.887970 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.990569 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.990598 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.990608 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.990623 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:08 crc kubenswrapper[4752]: I0122 10:26:08.990634 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:08Z","lastTransitionTime":"2026-01-22T10:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.077244 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:46:20.087098162 +0000 UTC Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.093696 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.093747 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.093764 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.093786 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.093802 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.097270 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.097299 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:09 crc kubenswrapper[4752]: E0122 10:26:09.097464 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.097518 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:09 crc kubenswrapper[4752]: E0122 10:26:09.097784 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:09 crc kubenswrapper[4752]: E0122 10:26:09.097942 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.197716 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.197768 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.197786 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.197806 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.197818 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.300614 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.300659 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.300669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.300686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.300697 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.403982 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.404048 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.404070 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.404098 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.404120 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.507385 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.507485 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.507506 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.508349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.508650 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.612414 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.612460 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.612476 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.612499 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.612517 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.715455 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.715509 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.715527 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.715550 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.715566 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.818400 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.818440 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.818451 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.818467 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.818477 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.920777 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.920819 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.920829 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.920844 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:09 crc kubenswrapper[4752]: I0122 10:26:09.920884 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:09Z","lastTransitionTime":"2026-01-22T10:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.023706 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.023791 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.023810 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.023910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.023934 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.077926 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:11:35.02026296 +0000 UTC Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.097424 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:10 crc kubenswrapper[4752]: E0122 10:26:10.098024 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.126349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.126409 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.126421 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.126437 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.126448 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.228444 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.228768 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.229139 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.229478 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.229831 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.332760 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.333196 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.333350 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.333497 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.333678 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.436117 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.436160 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.436173 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.436188 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.436196 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.539839 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.540172 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.540271 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.540380 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.540464 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.643768 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.643835 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.644429 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.644486 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.644505 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.748002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.748081 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.748103 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.748133 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.748155 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.851225 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.851467 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.851552 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.851680 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.851784 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.928449 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:10 crc kubenswrapper[4752]: E0122 10:26:10.928717 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:42.928680237 +0000 UTC m=+82.158623185 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.928826 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:10 crc kubenswrapper[4752]: E0122 10:26:10.929026 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:26:10 crc kubenswrapper[4752]: E0122 10:26:10.929102 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 10:26:42.929086007 +0000 UTC m=+82.159028945 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.955019 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.955252 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.955408 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.955546 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.955691 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:10Z","lastTransitionTime":"2026-01-22T10:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.977418 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.990019 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 10:26:10 crc kubenswrapper[4752]: I0122 10:26:10.995823 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:10Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.014556 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.030404 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.030745 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.031011 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.030621 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 
10:26:11.031013 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031426 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031478 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031296 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031582 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031613 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031397 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:43.031375407 +0000 UTC m=+82.261318335 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031695 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:43.031669825 +0000 UTC m=+82.261612773 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.031746 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:43.031728516 +0000 UTC m=+82.261671464 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.052030 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.058204 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.058237 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.058246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.058262 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.058273 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.075662 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.078255 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:55:54.181617915 +0000 UTC Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.094448 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.097730 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.097917 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.098009 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.098104 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.098261 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:11 crc kubenswrapper[4752]: E0122 10:26:11.098319 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.112516 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.126177 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.140505 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.158600 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.164015 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.164246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.164383 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.164510 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.164636 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.180255 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.217107 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf
734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.235170 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.248222 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.268358 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.268413 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.268429 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.268448 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.268464 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.269318 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.282892 4752 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.293320 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.305029 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.317772 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.334574 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.360383 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 
10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.370990 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.371031 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.371040 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.371057 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.371066 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.374609 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.388472 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.399896 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 
10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.411195 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.424979 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.436974 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.451340 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.474562 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.474656 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.474677 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.474986 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.475028 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.477311 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.494157 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.516536 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.538726 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.556967 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.575631 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.578146 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.578193 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.578241 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.578261 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.578277 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.595364 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.611616 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:11Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.680708 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.680762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.680773 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.680793 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.680805 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.785463 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.785549 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.785565 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.785593 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.785614 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.890149 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.890257 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.890291 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.890325 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.890350 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.993966 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.994003 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.994014 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.994032 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:11 crc kubenswrapper[4752]: I0122 10:26:11.994044 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:11Z","lastTransitionTime":"2026-01-22T10:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.079239 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:00:17.893556389 +0000 UTC Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.097105 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.097143 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.097154 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.097180 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.097194 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.097266 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:12 crc kubenswrapper[4752]: E0122 10:26:12.097459 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.201634 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.201719 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.201743 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.201781 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.201806 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.305702 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.305761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.305782 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.305813 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.305836 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.408435 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.408478 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.408490 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.408506 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.408517 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.513127 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.513180 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.513194 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.513213 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.513229 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.616414 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.616464 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.616478 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.616505 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.616519 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.719403 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.719501 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.719519 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.720006 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.720033 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.826179 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.826229 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.826247 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.826269 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.826287 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.929131 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.929191 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.929209 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.929231 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:12 crc kubenswrapper[4752]: I0122 10:26:12.929248 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:12Z","lastTransitionTime":"2026-01-22T10:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.032186 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.032253 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.032271 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.032297 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.032314 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.079983 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:06:20.295464118 +0000 UTC Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.097442 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.097515 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.097598 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.097718 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.097907 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.098472 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.135142 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.135198 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.135246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.135269 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.135289 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.239226 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.239329 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.239448 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.239482 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.239503 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.341445 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.341585 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.341606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.341629 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.341647 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.364605 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.369521 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.369577 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.369594 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.369618 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.369636 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.390119 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.394600 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.394661 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.394678 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.394705 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.394726 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.413527 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.418247 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.418281 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.418291 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.418310 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.418324 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.437301 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.442494 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.442611 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.442630 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.442655 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.442673 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.463605 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:13Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:13 crc kubenswrapper[4752]: E0122 10:26:13.463998 4752 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.466245 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.466337 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.466369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.466396 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.466416 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.569952 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.570033 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.570055 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.570086 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.570109 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.673741 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.673825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.673848 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.673933 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.673972 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.776490 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.776556 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.776579 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.776609 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.776632 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.879119 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.879148 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.879158 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.879173 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.879184 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.981273 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.981302 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.981312 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.981327 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:13 crc kubenswrapper[4752]: I0122 10:26:13.981337 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:13Z","lastTransitionTime":"2026-01-22T10:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.081101 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:54:40.947125771 +0000 UTC Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.083808 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.083831 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.083840 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.083888 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.083900 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.097771 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:14 crc kubenswrapper[4752]: E0122 10:26:14.097923 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.186544 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.186582 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.186592 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.186606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.186616 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.314396 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.314464 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.314482 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.314508 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.314525 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.416678 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.416714 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.416724 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.416739 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.416749 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.519369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.519437 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.519461 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.519490 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.519516 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.622256 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.622317 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.622340 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.622365 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.622389 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.725969 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.726034 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.726056 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.726082 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.726103 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.830169 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.830258 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.830277 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.830305 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.830323 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.932934 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.933016 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.933052 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.933082 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:14 crc kubenswrapper[4752]: I0122 10:26:14.933103 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:14Z","lastTransitionTime":"2026-01-22T10:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.035681 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.035751 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.035761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.035799 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.035817 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.081805 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:11:27.582392097 +0000 UTC Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.097636 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.097686 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.097734 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:15 crc kubenswrapper[4752]: E0122 10:26:15.097931 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:15 crc kubenswrapper[4752]: E0122 10:26:15.098127 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:15 crc kubenswrapper[4752]: E0122 10:26:15.098238 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.139002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.139114 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.139132 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.139155 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.139173 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.242365 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.242476 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.242518 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.242549 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.242573 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.345895 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.345948 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.345965 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.345992 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.346012 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.448300 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.448352 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.448370 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.448396 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.448416 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.551498 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.551532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.551542 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.551558 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.551569 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.655249 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.655321 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.655342 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.655369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.655387 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.758201 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.758496 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.758625 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.758778 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.759010 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.862404 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.862471 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.862492 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.862519 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.862538 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.965237 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.965285 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.965301 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.965323 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:15 crc kubenswrapper[4752]: I0122 10:26:15.965342 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:15Z","lastTransitionTime":"2026-01-22T10:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.067987 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.068029 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.068038 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.068054 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.068063 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.082450 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:10:55.079496256 +0000 UTC Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.096977 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:16 crc kubenswrapper[4752]: E0122 10:26:16.097207 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.170639 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.170691 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.170702 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.170716 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.170725 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.273223 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.273263 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.273273 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.273290 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.273302 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.376817 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.377117 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.377231 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.377320 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.377390 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.480184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.480265 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.480287 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.480323 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.480347 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.583446 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.583523 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.583540 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.583567 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.583587 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.687322 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.687413 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.687432 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.687463 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.687489 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.694719 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:16 crc kubenswrapper[4752]: E0122 10:26:16.694910 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:16 crc kubenswrapper[4752]: E0122 10:26:16.694993 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:32.694969728 +0000 UTC m=+71.924912676 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.790149 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.790476 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.790676 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.790885 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.791048 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.895007 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.895191 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.895344 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.895412 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.895489 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:16Z","lastTransitionTime":"2026-01-22T10:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.999825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:16 crc kubenswrapper[4752]: I0122 10:26:16.999945 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.000025 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.000110 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.000177 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.082958 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:51:38.400931724 +0000 UTC Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.097741 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.097835 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:17 crc kubenswrapper[4752]: E0122 10:26:17.097965 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:17 crc kubenswrapper[4752]: E0122 10:26:17.098090 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.098473 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:17 crc kubenswrapper[4752]: E0122 10:26:17.098919 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.103161 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.103225 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.103244 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.103270 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.103289 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.206500 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.206730 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.206772 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.206806 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.206832 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.309701 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.309997 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.310071 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.310152 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.310222 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.413656 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.413766 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.413791 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.413816 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.413833 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.517216 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.517301 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.517321 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.517344 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.517360 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.621165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.621238 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.621249 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.621262 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.621270 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.724094 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.724149 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.724166 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.724189 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.724206 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.827456 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.827527 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.827546 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.827571 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.827591 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.929921 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.929987 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.930005 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.930317 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:17 crc kubenswrapper[4752]: I0122 10:26:17.930626 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:17Z","lastTransitionTime":"2026-01-22T10:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.034618 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.034675 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.034686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.034705 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.034717 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.083226 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:12:16.346364618 +0000 UTC Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.097896 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:18 crc kubenswrapper[4752]: E0122 10:26:18.098055 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.100045 4752 scope.go:117] "RemoveContainer" containerID="946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.138055 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.138176 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.138203 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.138233 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.138257 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.240945 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.240974 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.240982 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.240998 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.241007 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.343503 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.343552 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.343563 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.343579 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.343590 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.446131 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.446175 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.446184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.446198 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.446207 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.484651 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/1.log" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.488001 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.488937 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.511387 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.525668 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.546325 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.548017 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.548052 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.548064 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.548079 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.548090 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.556047 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.567167 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.580516 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.591526 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.603120 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.626078 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.639907 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.649978 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.650039 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.650057 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 
10:26:18.650078 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.650094 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.654640 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.670947 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.683190 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.693680 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.703738 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.730918 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.742670 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.752369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.752407 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.752417 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.752429 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.752438 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.763245 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f68
5d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 
10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.854885 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.854934 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.854946 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.854963 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.854974 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.957618 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.957666 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.957674 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.957688 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:18 crc kubenswrapper[4752]: I0122 10:26:18.957700 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:18Z","lastTransitionTime":"2026-01-22T10:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.060731 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.060813 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.060824 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.060841 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.060869 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.083887 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:26:11.858723214 +0000 UTC Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.097814 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.097825 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.097834 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:19 crc kubenswrapper[4752]: E0122 10:26:19.098045 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:19 crc kubenswrapper[4752]: E0122 10:26:19.098149 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:19 crc kubenswrapper[4752]: E0122 10:26:19.098267 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.164849 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.164949 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.164969 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.164999 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.165020 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.268802 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.269399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.269412 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.269435 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.269450 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.373091 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.373188 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.373204 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.373229 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.373247 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.476716 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.476773 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.476786 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.476807 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.476824 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.493461 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/2.log" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.494588 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/1.log" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.499580 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" exitCode=1 Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.499674 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.499728 4752 scope.go:117] "RemoveContainer" containerID="946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.500603 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:26:19 crc kubenswrapper[4752]: E0122 10:26:19.500827 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\"" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.518883 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.538409 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.563229 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946136c6c974ec251484c9443e50ec1b6c020abf734d1f697953af7803db3f5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"message\\\":\\\"25:58.585132 6208 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585188 6208 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585242 6208 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585290 6208 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585338 6208 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.585755 6208 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 10:25:58.586946 6208 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 10:25:58.586968 6208 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 10:25:58.586990 6208 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 10:25:58.587017 6208 factory.go:656] Stopping watch factory\\\\nI0122 10:25:58.587033 6208 ovnkube.go:599] Stopped ovnkube\\\\nI0122 10:25:58.587055 6208 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 10:25:58.587065 6208 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 
10:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 
10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.578812 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.579964 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.580150 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.580238 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.580348 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.580426 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.592337 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.608814 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.629470 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.647098 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.665281 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.680972 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 
10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.683963 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.684016 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.684027 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.684047 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.684068 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.713015 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.728952 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.745663 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.765168 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.779629 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.787135 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.787200 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.787220 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.787244 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.787266 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.797417 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.811932 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.830650 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:19Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.889825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.889910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.889930 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.889954 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.889971 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.992396 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.992461 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.992481 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.992506 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:19 crc kubenswrapper[4752]: I0122 10:26:19.992524 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:19Z","lastTransitionTime":"2026-01-22T10:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.084183 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:12:59.291320876 +0000 UTC Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.095798 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.095909 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.095938 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.095977 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.096003 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.097496 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:20 crc kubenswrapper[4752]: E0122 10:26:20.097735 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.198686 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.198756 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.198773 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.198799 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.198817 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.302111 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.302191 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.302215 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.302246 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.302268 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.405646 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.405719 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.405741 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.405769 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.406013 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.506407 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/2.log" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.508757 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.508819 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.508841 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.508913 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.508939 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.512530 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:26:20 crc kubenswrapper[4752]: E0122 10:26:20.512811 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\"" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.528123 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.544560 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.558784 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.577876 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.597644 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.618939 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.619011 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.619030 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.619481 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.619563 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.625527 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f68
5d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.641894 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.680719 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T1
0:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.701952 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.722387 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.722419 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.722427 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.722441 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.722451 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.725377 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.744265 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.756417 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.767211 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.787578 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.801592 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.815214 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.824321 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.824346 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.824355 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.824371 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.824383 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.830281 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.841711 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:20Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.926642 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.926700 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.926718 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.926742 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:20 crc kubenswrapper[4752]: I0122 10:26:20.926760 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:20Z","lastTransitionTime":"2026-01-22T10:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.029217 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.029279 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.029302 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.029330 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.029351 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.085007 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:04:28.248717408 +0000 UTC Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.096980 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:21 crc kubenswrapper[4752]: E0122 10:26:21.097372 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.097660 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:21 crc kubenswrapper[4752]: E0122 10:26:21.098420 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.097804 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:21 crc kubenswrapper[4752]: E0122 10:26:21.098672 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.116141 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.133207 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.133273 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.133291 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.133323 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.133341 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.138901 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.153681 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.164077 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.196107 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff31
8ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.217824 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.234544 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.237019 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.237068 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.237079 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.237097 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.237109 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.250198 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.269574 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.301119 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277
f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 
10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.315413 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.331951 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.339507 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.339550 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.339562 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.339580 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.339592 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.348457 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.364713 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.378371 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.394520 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.414527 4752 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.428541 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:21Z is after 2025-08-24T17:21:41Z" Jan 22 
10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.442847 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.442899 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.442910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.442926 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.442935 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.545788 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.545898 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.545914 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.545942 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.545959 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.649264 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.649545 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.649568 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.649589 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.649605 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.752682 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.752723 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.752733 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.752751 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.752766 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.856084 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.856168 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.856216 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.856236 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.856251 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.959091 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.959142 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.959151 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.959163 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:21 crc kubenswrapper[4752]: I0122 10:26:21.959173 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:21Z","lastTransitionTime":"2026-01-22T10:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.062312 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.062348 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.062358 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.062373 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.062385 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:22Z","lastTransitionTime":"2026-01-22T10:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.085901 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:09:47.433452671 +0000 UTC Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.097144 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:22 crc kubenswrapper[4752]: E0122 10:26:22.097235 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.165759 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.165829 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.165845 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.165886 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.165902 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:22Z","lastTransitionTime":"2026-01-22T10:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.268940 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.269057 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.269081 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.269111 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:22 crc kubenswrapper[4752]: I0122 10:26:22.269134 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:22Z","lastTransitionTime":"2026-01-22T10:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.086594 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:00:01.284015878 +0000 UTC
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.096819 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.097013 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.097089 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.097260 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.097510 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.097808 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.099745 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.099800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.099810 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.099832 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.099844 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
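The entries above show why every pod sandbox creation is being skipped: the kubelet's network-readiness check fails because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so the runtime reports NetworkReady=false and pods such as networking-console-plugin-85b44fc459-gdk6g cannot be synced. A minimal Go sketch of that check follows, assuming only the directory path quoted in the error message; the accepted extensions (.conf, .conflist, .json) are the conventional CNI config extensions and are an assumption, not something the log states.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet's NetworkPluginNotReady error.
	dir := "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}

	found := 0
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// .conf/.conflist/.json are the usual CNI config extensions
		// (an assumption; the kubelet only requires that a valid
		// network config eventually appears in this directory).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files: matches the kubelet's NetworkReady=false condition")
	}
}

An empty result here is consistent with NetworkPluginNotReady: the network provider (typically OVN-Kubernetes on this platform) has not yet written its config, so the kubelet keeps the node NotReady and the sandbox creation retries above keep failing.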
Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.634849 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:23Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.640788 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.640899 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.640924 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.640953 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.640973 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.658190 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
2025-08-24T17:21:41Z" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.684810 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.685054 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.685145 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.685268 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.685354 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.703695 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:23Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.709667 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.709805 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.709916 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.710032 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.710151 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.729661 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...image list identical to the previous patch attempt above, elided...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:23Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:23 crc kubenswrapper[4752]: E0122 10:26:23.730078 4752 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.731764 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.731933 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.732150 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.732357 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.732580 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.836532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.837331 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.837501 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.837649 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.837792 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.942387 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.942482 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.942500 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.942543 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:23 crc kubenswrapper[4752]: I0122 10:26:23.942576 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:23Z","lastTransitionTime":"2026-01-22T10:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.052893 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.052951 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.052965 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.052982 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.052994 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.088514 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:56:07.905468488 +0000 UTC Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.096889 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:24 crc kubenswrapper[4752]: E0122 10:26:24.097139 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.157017 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.157073 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.157093 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.157120 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.157138 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.261002 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.261058 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.261074 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.261096 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.261114 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.364365 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.364422 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.364438 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.364464 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.364480 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.467953 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.468016 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.468032 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.468056 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.468074 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.571530 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.571624 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.571643 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.571673 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.571693 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.675284 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.675353 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.675369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.675397 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.675418 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.779170 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.779251 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.779272 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.779302 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.779320 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.882818 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.882955 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.882982 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.883009 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.883025 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.986593 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.986653 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.986663 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.986685 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:24 crc kubenswrapper[4752]: I0122 10:26:24.986697 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:24Z","lastTransitionTime":"2026-01-22T10:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.088953 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:40:09.459674997 +0000 UTC Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.089498 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.089548 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.089561 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.089582 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.089597 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.097596 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.098124 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:25 crc kubenswrapper[4752]: E0122 10:26:25.098260 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.098281 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:25 crc kubenswrapper[4752]: E0122 10:26:25.098463 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:25 crc kubenswrapper[4752]: E0122 10:26:25.098553 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.192613 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.192690 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.192707 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.192734 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.192755 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.295314 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.295399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.295423 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.295454 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.295473 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.398346 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.398398 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.398409 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.398431 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.398445 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.500296 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.500329 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.500339 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.500354 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.500366 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.602758 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.602841 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.603132 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.603173 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.603194 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.705828 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.705995 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.706014 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.706051 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.706071 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.809086 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.809189 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.809225 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.809255 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.809280 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.912313 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.912384 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.912399 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.912418 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:25 crc kubenswrapper[4752]: I0122 10:26:25.912453 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:25Z","lastTransitionTime":"2026-01-22T10:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.015532 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.015593 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.015616 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.015645 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.015667 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.089113 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:04:13.711171168 +0000 UTC Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.097513 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:26 crc kubenswrapper[4752]: E0122 10:26:26.097719 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.118256 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.118327 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.118343 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.118369 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.118383 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.221465 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.221511 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.221523 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.221541 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.221554 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.324404 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.324468 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.324484 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.324508 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.324527 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.426691 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.426762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.426773 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.426800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.426815 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.529084 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.529113 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.529125 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.529138 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.529146 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.630488 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.630525 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.630538 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.630554 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.630566 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.733800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.733892 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.733918 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.733944 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.733966 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.836521 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.836552 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.836562 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.836578 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.836589 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.938516 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.938548 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.938562 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.938603 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:26 crc kubenswrapper[4752]: I0122 10:26:26.938616 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:26Z","lastTransitionTime":"2026-01-22T10:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.040718 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.040783 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.040800 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.040825 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.040845 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.089716 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:46:17.445470041 +0000 UTC Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.097185 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.097242 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.097196 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:27 crc kubenswrapper[4752]: E0122 10:26:27.097350 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:27 crc kubenswrapper[4752]: E0122 10:26:27.097473 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:27 crc kubenswrapper[4752]: E0122 10:26:27.097765 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.142494 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.142556 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.142574 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.142600 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.142619 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.245260 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.245304 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.245317 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.245335 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.245347 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.348555 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.348619 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.348631 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.348653 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.348672 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.451110 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.451165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.451179 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.451200 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.451212 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.553994 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.554039 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.554053 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.554068 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.554085 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.656415 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.656487 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.656498 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.656522 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.656534 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.759524 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.759611 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.759628 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.759657 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.759677 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.862690 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.862744 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.862757 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.862778 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.862792 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.966203 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.966261 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.966277 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.966298 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:27 crc kubenswrapper[4752]: I0122 10:26:27.966312 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:27Z","lastTransitionTime":"2026-01-22T10:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.068992 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.069038 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.069046 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.069066 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.069075 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.090457 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:50:12.797202066 +0000 UTC Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.096807 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:28 crc kubenswrapper[4752]: E0122 10:26:28.097018 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.172138 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.172311 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.172327 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.172349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.172364 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.275482 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.275542 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.275565 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.275587 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.275600 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.378761 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.378809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.378826 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.378844 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.378874 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.481372 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.481436 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.481454 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.481478 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.481496 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.585316 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.585396 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.585412 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.585435 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.585449 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.688682 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.688772 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.688811 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.688848 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.688906 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.790982 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.791017 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.791026 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.791040 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.791050 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.893782 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.893868 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.893879 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.893892 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.893901 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.995928 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.996012 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.996027 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.996044 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:28 crc kubenswrapper[4752]: I0122 10:26:28.996073 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:28Z","lastTransitionTime":"2026-01-22T10:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.091159 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:28:41.004864054 +0000 UTC Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.097450 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.097486 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.097450 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:29 crc kubenswrapper[4752]: E0122 10:26:29.097599 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:29 crc kubenswrapper[4752]: E0122 10:26:29.097673 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:29 crc kubenswrapper[4752]: E0122 10:26:29.097749 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.099766 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.099794 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.099805 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.099822 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.099834 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.203174 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.203225 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.203237 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.203256 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.203270 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.305788 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.305833 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.305842 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.305878 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.305892 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.409323 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.409386 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.409403 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.409431 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.409452 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.513169 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.513249 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.513278 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.513302 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.513314 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.616247 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.616303 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.616320 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.616344 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.616361 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.718845 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.718997 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.719019 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.719045 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.719064 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.822126 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.822161 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.822172 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.822184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.822192 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.925234 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.925288 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.925299 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.925311 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:29 crc kubenswrapper[4752]: I0122 10:26:29.925322 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:29Z","lastTransitionTime":"2026-01-22T10:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.028492 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.028572 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.028598 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.028668 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.028692 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.091633 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:35:00.324907703 +0000 UTC Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.096977 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:30 crc kubenswrapper[4752]: E0122 10:26:30.097120 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.131534 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.131590 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.131605 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.131625 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.131640 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.234378 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.234419 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.234430 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.234445 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.234457 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.337082 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.337170 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.337194 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.337225 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.337251 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.439822 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.439943 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.439966 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.439992 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.440011 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.544560 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.544627 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.544651 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.544682 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.544705 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.648804 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.648882 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.648899 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.648928 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.648943 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.751377 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.751456 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.751479 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.751508 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.751537 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.853971 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.854024 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.854043 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.854065 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.854083 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.955945 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.956023 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.956050 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.956083 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:30 crc kubenswrapper[4752]: I0122 10:26:30.956108 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:30Z","lastTransitionTime":"2026-01-22T10:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.059374 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.059414 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.059423 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.059439 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.059448 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.092262 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:38:12.936376515 +0000 UTC Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.097656 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.097754 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:31 crc kubenswrapper[4752]: E0122 10:26:31.098008 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.098062 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:31 crc kubenswrapper[4752]: E0122 10:26:31.098122 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:31 crc kubenswrapper[4752]: E0122 10:26:31.098182 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.134070 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.152846 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.163038 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.163073 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.163083 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.163098 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.163110 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.170609 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.185260 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.198849 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.214683 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.228580 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.240387 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.254869 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.265725 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.265868 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.265881 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.265895 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.266011 4752 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.265950 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.287683 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f68
5d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.298253 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.314051 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T1
0:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.324713 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.335906 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.349267 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.363004 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.368312 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.368336 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.368344 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.368357 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.368366 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.374127 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:31Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.470209 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.470253 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.470265 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.470284 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.470295 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.572143 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.572202 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.572221 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.572244 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.572261 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.674496 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.674577 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.674601 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.674630 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.674649 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.777285 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.777322 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.777333 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.777349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.777361 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.881847 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.881919 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.881929 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.881942 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.881969 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.984499 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.984552 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.984569 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.984592 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:31 crc kubenswrapper[4752]: I0122 10:26:31.984609 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:31Z","lastTransitionTime":"2026-01-22T10:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.088021 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.088111 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.088135 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.088165 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.088186 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.093413 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:42:08.579367729 +0000 UTC Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.096828 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:32 crc kubenswrapper[4752]: E0122 10:26:32.097225 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.190762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.190806 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.190821 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.190839 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.190851 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.293684 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.293738 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.293750 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.293767 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.293779 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.395652 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.395693 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.395701 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.395715 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.395724 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.498229 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.498561 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.498642 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.498752 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.498826 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.601690 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.601753 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.601774 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.601799 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.601816 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.704475 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.704524 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.704540 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.704559 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.704573 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.772799 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw"
Jan 22 10:26:32 crc kubenswrapper[4752]: E0122 10:26:32.772986 4752 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 10:26:32 crc kubenswrapper[4752]: E0122 10:26:32.773034 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs podName:6bbb033b-8d31-4200-b77f-4910b5170085 nodeName:}" failed. No retries permitted until 2026-01-22 10:27:04.773019572 +0000 UTC m=+104.002962480 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs") pod "network-metrics-daemon-69crw" (UID: "6bbb033b-8d31-4200-b77f-4910b5170085") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.807163 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.807219 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.807235 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.807257 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.807274 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.910390 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.910450 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.910469 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.910496 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:32 crc kubenswrapper[4752]: I0122 10:26:32.910514 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:32Z","lastTransitionTime":"2026-01-22T10:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.018949 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.019021 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.019040 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.019063 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.019073 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.094374 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:05:03.76634795 +0000 UTC
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.097902 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.097936 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.097944 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.098135 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.098313 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.098494 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.121784 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.122081 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.122201 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.122349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.122461 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.226843 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.227130 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.227339 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.227444 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.227541 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.330095 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.330520 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.330672 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.330808 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.330966 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.433383 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.433414 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.433422 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.433435 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.433444 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.535751 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.535799 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.535811 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.535831 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.535843 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.557476 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6nmbt_25322265-5a85-4c78-bf60-61836307404e/kube-multus/0.log"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.557519 4752 generic.go:334] "Generic (PLEG): container finished" podID="25322265-5a85-4c78-bf60-61836307404e" containerID="6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e" exitCode=1
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.557552 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6nmbt" event={"ID":"25322265-5a85-4c78-bf60-61836307404e","Type":"ContainerDied","Data":"6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.558056 4752 scope.go:117] "RemoveContainer" containerID="6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.572686 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.591149 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.603318 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.619456 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.635392 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"2026-01-22T10:25:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73c94845-0a9e-4d84-9c95-d0153ae8543d\\\\n2026-01-22T10:25:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73c94845-0a9e-4d84-9c95-d0153ae8543d to /host/opt/cni/bin/\\\\n2026-01-22T10:25:48Z [verbose] multus-daemon started\\\\n2026-01-22T10:25:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:26:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.637979 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.638020 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.638038 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.638059 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.638076 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.651484 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.661793 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z"
Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.679365 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.695235 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.711318 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.724500 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.737317 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.740316 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.740339 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.740347 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.740361 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.740372 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.750194 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.768013 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.783600 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.801323 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.817894 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.830742 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.842343 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.842534 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.842639 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.842767 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.842882 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.897738 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.897839 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.897869 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.897889 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.897900 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.915837 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.919922 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.920086 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.920316 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.920452 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.920558 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.938611 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.947148 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.947181 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.947189 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.947204 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.947215 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.969753 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.974419 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.974557 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.974679 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.974808 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.974948 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:33 crc kubenswrapper[4752]: E0122 10:26:33.991263 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:33Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.998540 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.998852 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.999001 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.999135 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:33 crc kubenswrapper[4752]: I0122 10:26:33.999245 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:33Z","lastTransitionTime":"2026-01-22T10:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: E0122 10:26:34.013213 4752 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6520db-2739-481f-9d91-77c81039e25e\\\",\\\"systemUUID\\\":\\\"d71a021a-a6a8-4801-b0d5-dbfd44512a09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: E0122 10:26:34.013359 4752 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.014810 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.014840 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.014876 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.014895 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.014908 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.095189 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:00:33.871825783 +0000 UTC Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.097644 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:34 crc kubenswrapper[4752]: E0122 10:26:34.097821 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.099018 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:26:34 crc kubenswrapper[4752]: E0122 10:26:34.099352 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\"" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.118258 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.118460 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.118665 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.118919 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.119102 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.222169 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.222212 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.222228 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.222250 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.222268 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.325296 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.325357 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.325378 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.325405 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.325428 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.428714 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.428770 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.428792 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.428818 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.428838 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.532218 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.532577 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.532726 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.532934 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.533313 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.563824 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6nmbt_25322265-5a85-4c78-bf60-61836307404e/kube-multus/0.log" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.564033 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6nmbt" event={"ID":"25322265-5a85-4c78-bf60-61836307404e","Type":"ContainerStarted","Data":"88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.583338 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.600747 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"2026-01-22T10:25:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73c94845-0a9e-4d84-9c95-d0153ae8543d\\\\n2026-01-22T10:25:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73c94845-0a9e-4d84-9c95-d0153ae8543d to /host/opt/cni/bin/\\\\n2026-01-22T10:25:48Z [verbose] multus-daemon started\\\\n2026-01-22T10:25:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:26:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.624347 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.636298 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.636531 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.636820 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.637039 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.637179 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.641173 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.661161 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 
10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.679016 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.698420 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.715328 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.730664 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.739759 4752 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.739943 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.739967 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.739998 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.740021 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.750106 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.784429 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.803293 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.824176 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.842060 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.842836 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.842892 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.842906 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.842926 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.842942 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.856021 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.871008 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.888788 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.902230 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:34Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.945888 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.945931 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.945942 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.945983 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:34 crc kubenswrapper[4752]: I0122 10:26:34.945995 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:34Z","lastTransitionTime":"2026-01-22T10:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.049215 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.049258 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.049269 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.049286 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.049297 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.096009 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:21:43.46406664 +0000 UTC Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.098106 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:35 crc kubenswrapper[4752]: E0122 10:26:35.098279 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.098663 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:35 crc kubenswrapper[4752]: E0122 10:26:35.098975 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.099150 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:35 crc kubenswrapper[4752]: E0122 10:26:35.099385 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.153017 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.153076 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.153088 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.153109 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.153122 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.256846 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.256919 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.256935 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.256957 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.256974 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.361131 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.361184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.361199 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.361216 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.361230 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.464264 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.464313 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.464327 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.464345 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.464358 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.567117 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.567222 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.567247 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.567280 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.567307 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.670813 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.670927 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.670944 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.670966 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.670984 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.773351 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.773448 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.773467 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.773497 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.773516 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.877051 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.877126 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.877147 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.877173 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.877191 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.980833 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.980914 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.980930 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.980953 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:35 crc kubenswrapper[4752]: I0122 10:26:35.980969 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:35Z","lastTransitionTime":"2026-01-22T10:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.083949 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.084895 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.084930 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.084962 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.084987 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.096656 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:52:25.042331433 +0000 UTC Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.096817 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:36 crc kubenswrapper[4752]: E0122 10:26:36.097023 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.187646 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.187752 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.187774 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.187798 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.187816 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.290544 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.290609 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.290626 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.290655 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.290679 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.393955 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.393995 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.394005 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.394022 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.394035 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.496594 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.496627 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.496636 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.496669 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.496679 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.598447 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.598506 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.598518 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.598534 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.598544 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.701268 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.701346 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.701370 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.701401 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.701427 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.805050 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.805413 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.805561 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.805700 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.805814 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.909052 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.909105 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.909117 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.909139 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:36 crc kubenswrapper[4752]: I0122 10:26:36.909159 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:36Z","lastTransitionTime":"2026-01-22T10:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.012606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.012679 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.012704 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.012734 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.012758 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.097633 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:29:56.24562983 +0000 UTC Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.097827 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.097888 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.097909 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:37 crc kubenswrapper[4752]: E0122 10:26:37.098152 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:37 crc kubenswrapper[4752]: E0122 10:26:37.098355 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:37 crc kubenswrapper[4752]: E0122 10:26:37.098482 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.116270 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.116655 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.116791 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.116979 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.117106 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.220920 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.220984 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.221005 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.221041 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.221059 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.324463 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.324809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.324950 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.325067 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.325168 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.429026 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.429253 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.429363 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.429461 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.429542 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.532372 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.532415 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.532426 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.532442 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.532453 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.634996 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.635068 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.635091 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.635123 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.635145 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.739110 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.739484 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.739780 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.740106 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.740272 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.843003 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.843037 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.843047 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.843059 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.843066 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.945659 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.945717 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.945734 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.945758 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:37 crc kubenswrapper[4752]: I0122 10:26:37.945776 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:37Z","lastTransitionTime":"2026-01-22T10:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.049472 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.049552 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.049573 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.049606 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.049627 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.097833 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:38 crc kubenswrapper[4752]: E0122 10:26:38.098141 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.098225 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:07:29.136695342 +0000 UTC Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.152502 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.152597 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.152613 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.152635 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.152649 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.255560 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.255688 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.255718 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.255747 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.255832 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.358964 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.358987 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.359013 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.359027 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.359037 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.462080 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.462122 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.462132 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.462162 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.462173 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.564839 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.565170 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.565345 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.565774 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.566011 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.668440 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.668500 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.668523 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.668551 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.668569 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.771712 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.771746 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.771756 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.771770 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.771780 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.874335 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.874388 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.874410 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.874438 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.874460 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.976655 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.976685 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.976694 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.976707 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:38 crc kubenswrapper[4752]: I0122 10:26:38.976717 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:38Z","lastTransitionTime":"2026-01-22T10:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.079409 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.079479 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.079503 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.079530 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.079552 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.097633 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.097632 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.098032 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:39 crc kubenswrapper[4752]: E0122 10:26:39.098252 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:39 crc kubenswrapper[4752]: E0122 10:26:39.098380 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.098431 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:49:24.040997493 +0000 UTC Jan 22 10:26:39 crc kubenswrapper[4752]: E0122 10:26:39.099156 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.182787 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.182896 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.182922 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.182950 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.182970 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.286582 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.286636 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.286657 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.286689 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.286711 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.388775 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.388814 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.388822 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.388835 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.388844 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.495621 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.496018 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.496196 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.496395 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.496567 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.599753 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.599821 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.599841 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.599910 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.599966 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.703320 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.703378 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.703394 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.703417 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.703436 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.806765 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.806824 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.806842 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.806905 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.806928 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.910516 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.910574 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.910590 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.910613 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:39 crc kubenswrapper[4752]: I0122 10:26:39.910631 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:39Z","lastTransitionTime":"2026-01-22T10:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.013746 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.013789 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.013819 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.013838 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.013849 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.097390 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:40 crc kubenswrapper[4752]: E0122 10:26:40.097614 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.099468 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:31:19.649200369 +0000 UTC Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.116892 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.116955 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.116977 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.117000 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.117017 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.219627 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.219671 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.219682 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.219699 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.219711 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.322215 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.322513 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.322586 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.322655 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.322755 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.426174 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.426531 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.426717 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.426893 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.427075 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.529983 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.530018 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.530027 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.530042 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.530053 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.632158 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.632195 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.632206 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.632220 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.632231 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.734710 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.734741 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.734748 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.734762 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.734770 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.837093 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.837484 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.837818 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.838194 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.838550 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.941325 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.941707 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.941917 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.942107 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:40 crc kubenswrapper[4752]: I0122 10:26:40.942265 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:40Z","lastTransitionTime":"2026-01-22T10:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.044710 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.045091 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.045251 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.045375 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.045489 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.098147 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.098206 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:41 crc kubenswrapper[4752]: E0122 10:26:41.098472 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:41 crc kubenswrapper[4752]: E0122 10:26:41.098654 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.098774 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:41 crc kubenswrapper[4752]: E0122 10:26:41.099043 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.100068 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:35:01.153373498 +0000 UTC Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.123077 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"895510e8-215e-440a-9096-85250959be30\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046d6fc029aa0f43a88d15d7ec932e71f452d4164e64a1ada0a473bff0c2526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3bcca0f615cf2cb4277800f1dc6c9d925e0b9eb06f6f4764c46b60c451c13b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33b5907b5df2aa4f7a9887dd8ff9c8e53ba5fb00974e6613eabd613325c2de8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.143765 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.149124 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.149177 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.149196 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.149224 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.149241 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.162494 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prdjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01ba31a2-a4da-4736-8b30-1c4cf57e39fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f54dcb35f5e42cb57c13f16ffab324fd591165587cdb04a9c89ebce129005d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dk788\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prdjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.187093 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5500f584-6d10-4bf4-8ec6-98157d49828c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 10:25:33.259435 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 10:25:33.260234 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1775104225/tls.crt::/tmp/serving-cert-1775104225/tls.key\\\\\\\"\\\\nI0122 10:25:39.052396 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 10:25:39.057698 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 10:25:39.057733 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 10:25:39.057809 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 10:25:39.057824 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 10:25:39.067332 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 10:25:39.067481 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067533 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 10:25:39.067581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 10:25:39.067626 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 10:25:39.067689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 10:25:39.067735 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 10:25:39.067967 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 10:25:39.071110 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.211005 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6nmbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25322265-5a85-4c78-bf60-61836307404e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:33Z\\\",\\\"message\\\":\\\"2026-01-22T10:25:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73c94845-0a9e-4d84-9c95-d0153ae8543d\\\\n2026-01-22T10:25:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73c94845-0a9e-4d84-9c95-d0153ae8543d to /host/opt/cni/bin/\\\\n2026-01-22T10:25:48Z [verbose] multus-daemon started\\\\n2026-01-22T10:25:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T10:26:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:26:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ww26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6nmbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.238271 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T10:26:19Z\\\",\\\"message\\\":\\\"ne-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0122 10:26:18.910717 6437 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nF0122 10:26:18.910724 6437 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:18Z is after 2025-08-24T17:21:41Z]\\\\nI0122 10:26:18.910734 6437 services_controller.go:454] Service openshift-machine-api/machine-api-operator for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 10:26:18.910705\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T10:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-784rk_openshift-ovn-kubernetes(bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb6gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-784rk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.251568 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6v582" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db1e85a-38c2-41d7-8b2e-684b64e813be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bec36dff2cdfff4610daf95f1806e47f0a275b67df23f36e8dd4363baf29e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mj7kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6v582\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.252921 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.252955 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.252963 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.252980 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.252989 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.261993 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb8df70c-9474-4827-8831-f39fc6883d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d1f73a22b61b7214f6d3f60b924dd1f2c82ecafa6190f3dcdceceea8d00ece0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffsqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6hm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.274616 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8271e9d0-84de-47c5-82bb-35fd1af29e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://198c87bacd03bbc21825aea034abc4d990d941e5869fd32afc17a944185aac93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82587b7ddd2baa41f58c43796ac6e35f39ea13b408c8f30cc6881e5b41660552\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2824b8357817d013d36750e4023fb6e24c2b827ed2b8af1b3ce14390ff84cffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d419682e1eaae598685d4c798323907000d572147d06e926591beaa5b30049d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66fd8d67af062a92db4a20432c0bfabb5bf3fd350e78ca4c5ff62bf1302137ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:51Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ed38468f4de6ef9270d7116452e23d1f0168c6cf40f5ec89b0a6383615ab556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d76e5bd95398c23ebc39c8188daedee4d9b187a1d20aa39eabe2db8d63012\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnnl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6pbrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.285046 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea20a00a-de56-4ce1-b008-6ebe5ba07354\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8c8f23b87fd82babb5fe7aabecbe891c628609aadb4ca645bcdfc5d60593e2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da111a1634107b11f9dc8909d6e65cd6ccaa586d556262db6f9e72b594cdb511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4sscj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.298613 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d339344bce0273ab6f82595683130e1a125861cbc26dc6c4bbd6176310b1e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.313078 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bb42818f20ceddd6334403636d19fb7ebef795b39761e3c2db6ef0e82d85b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baed3fa67197030828ba227b801f7ca09ad856f717454dcd4d822d5a7b63a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.324354 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.346330 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-69crw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bbb033b-8d31-4200-b77f-4910b5170085\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmmq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:26:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-69crw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.355964 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.356007 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.356019 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.356034 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.356044 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.364902 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6c298a6-b9c7-4962-b864-82ae0776bce7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://914069b0303866ce912eda3435e372357db87b25f295e26b9c56ff3bed01cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f6442bf9e495d56fdd0933d354c4e9e91b0e336e762ef02bc7de6b7494e7c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632a38cf7ce3e9e01de8389249c5648b6268c44d28ebe92f9d5d67f52270e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc814062629efb808892b644611a9b160ff318ee39b773cba127b50acd29a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0976895d9c8e57f390c5ed62753b28a1a7c8ebdfee0634ff87260d82442fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865b2485fc52561f8083ec27ccf4514c0b40568c89bc87c6cfd52166b05b0cac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5eab5684295d70e906650d340a1d73f2eadc68ee572d61b64962ddc55035fbc1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da012f4f38b0813ad9694e02f44edcd1f123fe43234f54ba1bd299ee744d4049\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.380345 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed11cd28-7c69-40fe-b189-6a9e3fdb1b40\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:26:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60937db9f170129131f4a1a57506ef4a7531fa730e907c9e9e0fa47365c89441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191c5edbba8a6c821307be508d3dd5506f5866426d9c71935d329c1f50500c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d47441904b6b8d8f817e2d47598f828c18f1fd8d104c4e5c87e82d242794f570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1a0f8deef98dbdcc1c90cc6be06afc57e83bd9ce88d0575c90ff328a0de3a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T10:25:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T10:25:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T10:25:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.400276 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44e71fd99952235499416f47d03392cba31fba8b4104ba1f116f8e69191bcbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T10:25:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 
2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.416916 4752 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T10:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T10:26:41Z is after 2025-08-24T17:21:41Z" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.458891 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.458953 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.458971 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.458995 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.459013 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.561944 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.562020 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.562033 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.562047 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.562056 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.664738 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.664790 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.664809 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.664830 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.664845 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.767276 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.767349 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.767368 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.767394 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.767412 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.870257 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.870336 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.870364 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.870394 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.870415 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.973014 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.973077 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.973094 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.973119 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:41 crc kubenswrapper[4752]: I0122 10:26:41.973141 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:41Z","lastTransitionTime":"2026-01-22T10:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.076211 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.076277 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.076295 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.076320 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.076337 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.097151 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:42 crc kubenswrapper[4752]: E0122 10:26:42.097333 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.100572 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:20:46.454337402 +0000 UTC Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.179443 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.179513 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.179538 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.179565 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.179586 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.282018 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.282071 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.282081 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.282100 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.282111 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.384454 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.384497 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.384510 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.384528 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.384541 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.487101 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.487138 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.487149 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.487163 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.487175 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.589703 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.589744 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.589757 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.589774 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.589788 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.692568 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.692610 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.692623 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.692638 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.692650 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.796257 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.796304 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.796315 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.796333 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.796346 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.898116 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.898143 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.898152 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.898164 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.898174 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:42Z","lastTransitionTime":"2026-01-22T10:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.992503 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:42 crc kubenswrapper[4752]: E0122 10:26:42.992724 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:46.992673689 +0000 UTC m=+146.222616637 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:42 crc kubenswrapper[4752]: I0122 10:26:42.992941 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:42 crc kubenswrapper[4752]: E0122 10:26:42.993250 4752 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 10:26:42 crc kubenswrapper[4752]: E0122 10:26:42.993362 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:27:46.993335396 +0000 UTC m=+146.223278344 (durationBeforeRetry 1m4s). 
Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.000321 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.000389 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.000412 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.000441 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.000462 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.094174 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.094251 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094282 4752 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.094289 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094362 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 10:27:47.094338183 +0000 UTC m=+146.324281111 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094401 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094465 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094470 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094504 4752 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094513 4752 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094561 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 10:27:47.094545329 +0000 UTC m=+146.324488237 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094490 4752 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.094624 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 10:27:47.09461148 +0000 UTC m=+146.324554478 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
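Each kube-api-access-* volume above is a projected volume that bundles a ServiceAccount token with the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, so the mount cannot be prepared until those objects are registered in the kubelet's cache. A sketch of probing for the two ConfigMaps with standard client-go calls, assuming in-cluster credentials and RBAC that permits the reads:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Assumes the program runs in-cluster and is allowed to read
        // ConfigMaps in openshift-network-diagnostics.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := client.CoreV1().ConfigMaps("openshift-network-diagnostics").
                Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%s: err=%v\n", name, err)
        }
    }

The ConfigMaps exist server-side here; "not registered" means the kubelet's informer caches for that namespace have not been populated yet, which the later "Caches populated" records resolve.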
Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.097222 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.097266 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.097331 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.097333 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.097471 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:43 crc kubenswrapper[4752]: E0122 10:26:43.097528 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.100825 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:08:01.013559193 +0000 UTC Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.102351 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.102423 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.102443 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.102516 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.102533 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.205184 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.205265 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.205299 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.205327 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.205349 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.309249 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.309319 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.309339 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.309363 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.309382 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.412549 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.412621 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.412643 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.412667 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.412683 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.514985 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.515044 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.515065 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.515091 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.515112 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.617500 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.617568 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.617589 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.617617 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.617639 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.720935 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.721001 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.721025 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.721054 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.721075 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.824064 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.824148 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.824168 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.824198 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.824221 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.926510 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.926587 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.926612 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.926644 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:43 crc kubenswrapper[4752]: I0122 10:26:43.926662 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:43Z","lastTransitionTime":"2026-01-22T10:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.029746 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.029802 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.029821 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.029846 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.029895 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:44Z","lastTransitionTime":"2026-01-22T10:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.097292 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:44 crc kubenswrapper[4752]: E0122 10:26:44.097623 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.101767 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:11:52.52176344 +0000 UTC Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.132564 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.132648 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.132667 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.132690 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.132708 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:44Z","lastTransitionTime":"2026-01-22T10:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.235726 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.235799 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.235826 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.235886 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.235911 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:44Z","lastTransitionTime":"2026-01-22T10:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.339055 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.339107 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.339124 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.339147 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.339166 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:44Z","lastTransitionTime":"2026-01-22T10:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.352035 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.352083 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.352099 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.352120 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.352135 4752 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T10:26:44Z","lastTransitionTime":"2026-01-22T10:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.419015 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t"] Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.420015 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.422010 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.422299 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.423997 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.424399 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.450126 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.450102025 podStartE2EDuration="1m5.450102025s" podCreationTimestamp="2026-01-22 10:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.449559951 +0000 UTC m=+83.679502939" watchObservedRunningTime="2026-01-22 10:26:44.450102025 +0000 UTC m=+83.680044963" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.499001 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-prdjr" podStartSLOduration=58.49894156 podStartE2EDuration="58.49894156s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.488176199 +0000 UTC m=+83.718119107" watchObservedRunningTime="2026-01-22 10:26:44.49894156 +0000 UTC m=+83.728884508" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.510127 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/627611b4-5176-4990-ae9d-f95a907f72bb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.510517 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/627611b4-5176-4990-ae9d-f95a907f72bb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.510700 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/627611b4-5176-4990-ae9d-f95a907f72bb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.510964 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/627611b4-5176-4990-ae9d-f95a907f72bb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.511185 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627611b4-5176-4990-ae9d-f95a907f72bb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.517569 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6v582" podStartSLOduration=58.517543036 podStartE2EDuration="58.517543036s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.50007917 +0000 UTC m=+83.730022088" watchObservedRunningTime="2026-01-22 10:26:44.517543036 +0000 UTC m=+83.747485964" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.541577 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6nmbt" podStartSLOduration=58.541544302 podStartE2EDuration="58.541544302s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.539652643 +0000 UTC m=+83.769595561" watchObservedRunningTime="2026-01-22 10:26:44.541544302 +0000 UTC m=+83.771487250" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.542418 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=65.542404585 podStartE2EDuration="1m5.542404585s" podCreationTimestamp="2026-01-22 10:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.517831993 +0000 UTC m=+83.747774941" watchObservedRunningTime="2026-01-22 10:26:44.542404585 +0000 UTC m=+83.772347533" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.609901 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podStartSLOduration=58.609879296 podStartE2EDuration="58.609879296s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.609740452 +0000 UTC m=+83.839683360" watchObservedRunningTime="2026-01-22 10:26:44.609879296 +0000 UTC m=+83.839822204" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.617945 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627611b4-5176-4990-ae9d-f95a907f72bb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.618036 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627611b4-5176-4990-ae9d-f95a907f72bb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.618082 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/627611b4-5176-4990-ae9d-f95a907f72bb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.618131 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/627611b4-5176-4990-ae9d-f95a907f72bb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.618163 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/627611b4-5176-4990-ae9d-f95a907f72bb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.619098 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/627611b4-5176-4990-ae9d-f95a907f72bb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.620232 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/627611b4-5176-4990-ae9d-f95a907f72bb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.620234 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/627611b4-5176-4990-ae9d-f95a907f72bb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.628524 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627611b4-5176-4990-ae9d-f95a907f72bb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.645111 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627611b4-5176-4990-ae9d-f95a907f72bb-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-qtx5t\" (UID: \"627611b4-5176-4990-ae9d-f95a907f72bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.654291 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6pbrv" podStartSLOduration=58.654275205 podStartE2EDuration="58.654275205s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.637424065 +0000 UTC m=+83.867366983" watchObservedRunningTime="2026-01-22 10:26:44.654275205 +0000 UTC m=+83.884218113" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.667918 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxjjx" podStartSLOduration=58.66789398 podStartE2EDuration="58.66789398s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.655037865 +0000 UTC m=+83.884980773" watchObservedRunningTime="2026-01-22 10:26:44.66789398 +0000 UTC m=+83.897836888" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.739051 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.758110 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=65.75790163 podStartE2EDuration="1m5.75790163s" podCreationTimestamp="2026-01-22 10:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.752674624 +0000 UTC m=+83.982617532" watchObservedRunningTime="2026-01-22 10:26:44.75790163 +0000 UTC m=+83.987844628" Jan 22 10:26:44 crc kubenswrapper[4752]: I0122 10:26:44.788074 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=34.788052667 podStartE2EDuration="34.788052667s" podCreationTimestamp="2026-01-22 10:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:44.774570595 +0000 UTC m=+84.004513503" watchObservedRunningTime="2026-01-22 10:26:44.788052667 +0000 UTC m=+84.017995575" Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.098095 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:45 crc kubenswrapper[4752]: E0122 10:26:45.098270 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.098641 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.098692 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:45 crc kubenswrapper[4752]: E0122 10:26:45.098780 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:45 crc kubenswrapper[4752]: E0122 10:26:45.099074 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.099316 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.101986 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:38:00.009559743 +0000 UTC Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.102070 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.113422 4752 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.603621 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" event={"ID":"627611b4-5176-4990-ae9d-f95a907f72bb","Type":"ContainerStarted","Data":"e5bf48f835885f54d123bd6cadd9fd3a5a089c8fae9b8075b9e35663442ca346"} Jan 22 10:26:45 crc kubenswrapper[4752]: I0122 10:26:45.604048 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" event={"ID":"627611b4-5176-4990-ae9d-f95a907f72bb","Type":"ContainerStarted","Data":"9088f07428e252f858c6f5ef2f9ee81f2cbe81da0e3e8b081541b13d5f9c8772"} Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.097664 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:46 crc kubenswrapper[4752]: E0122 10:26:46.097773 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.610712 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/2.log" Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.615501 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerStarted","Data":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.616281 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.620561 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-69crw"] Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.620701 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:46 crc kubenswrapper[4752]: E0122 10:26:46.620904 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.682057 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podStartSLOduration=60.682034358 podStartE2EDuration="1m0.682034358s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:46.67674264 +0000 UTC m=+85.906685558" watchObservedRunningTime="2026-01-22 10:26:46.682034358 +0000 UTC m=+85.911977266" Jan 22 10:26:46 crc kubenswrapper[4752]: I0122 10:26:46.682346 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qtx5t" podStartSLOduration=60.682341806 podStartE2EDuration="1m0.682341806s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:46.642437734 +0000 UTC m=+85.872380652" watchObservedRunningTime="2026-01-22 10:26:46.682341806 +0000 UTC m=+85.912284714" Jan 22 10:26:47 crc kubenswrapper[4752]: I0122 10:26:47.097656 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:47 crc kubenswrapper[4752]: I0122 10:26:47.097989 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:47 crc kubenswrapper[4752]: E0122 10:26:47.098017 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 10:26:47 crc kubenswrapper[4752]: E0122 10:26:47.098169 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 10:26:47 crc kubenswrapper[4752]: I0122 10:26:47.098396 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:47 crc kubenswrapper[4752]: E0122 10:26:47.098653 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 10:26:47 crc kubenswrapper[4752]: I0122 10:26:47.114217 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 10:26:48 crc kubenswrapper[4752]: I0122 10:26:48.097612 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:48 crc kubenswrapper[4752]: E0122 10:26:48.097779 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-69crw" podUID="6bbb033b-8d31-4200-b77f-4910b5170085" Jan 22 10:26:48 crc kubenswrapper[4752]: I0122 10:26:48.979069 4752 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 10:26:48 crc kubenswrapper[4752]: I0122 10:26:48.979202 4752 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.036013 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g476l"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.036722 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.037089 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.038106 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.039774 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.040840 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.043490 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jh6kp"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.043942 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.052081 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.052786 4752 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 10:26:49 crc kubenswrapper[4752]: E0122 10:26:49.052829 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.052920 4752 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: configmaps "oauth-serving-cert" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 10:26:49 crc kubenswrapper[4752]: E0122 10:26:49.052933 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"oauth-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.053144 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.053314 4752 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: configmaps "console-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" 
in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 10:26:49 crc kubenswrapper[4752]: E0122 10:26:49.053334 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.053370 4752 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: secrets "console-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 10:26:49 crc kubenswrapper[4752]: E0122 10:26:49.053380 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.053559 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.053662 4752 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: configmaps "service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 10:26:49 crc kubenswrapper[4752]: E0122 10:26:49.053681 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.053714 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.053810 4752 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: secrets "console-oauth-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 22 10:26:49 crc kubenswrapper[4752]: E0122 10:26:49.053915 4752 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-oauth-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError"
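
[Editor's note] The "is forbidden ... no relationship found between node 'crc' and this object" warnings above are the Node authorizer behaving as designed: a kubelet may read a Secret or ConfigMap only after the authorizer's graph links that object to a pod bound to its node. These pods were ADDed milliseconds earlier, the graph edges lag slightly behind, the first LIST from each reflector is rejected, and the retry succeeds, which is why "Caches populated" lines for the same namespaces follow shortly after. A sketch of the scoped LIST the kubelet issues, using client-go; the kubeconfig path is an assumption, and the call is expected to fail with Forbidden until the pod-to-object edge exists:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed location of the node's own credentials (system:node:crc).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// The kubelet scopes each list/watch to one object via field selector;
    	// the Node authorizer still rejects it until a pod on this node
    	// references the ConfigMap.
    	_, err = client.CoreV1().ConfigMaps("openshift-console").List(context.TODO(),
    		metav1.ListOptions{FieldSelector: "metadata.name=trusted-ca-bundle"})
    	fmt.Println("list trusted-ca-bundle:", err)
    }

[End note]
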
Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.054160 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zw6f2"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.054352 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.054542 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.055841 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.056440 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.057738 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.058640 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.058725 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.058728 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.058722 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067180 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067228 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b26aa16-ebd4-47e8-bc74-c4e5185df358-serving-cert\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067261 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067290 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-serving-cert\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067318 4752
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067432 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067650 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5b26aa16-ebd4-47e8-bc74-c4e5185df358-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067707 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj98g\" (UniqueName: \"kubernetes.io/projected/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-kube-api-access-mj98g\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067743 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067876 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwf9\" (UniqueName: \"kubernetes.io/projected/b82cc492-857e-4eaf-8e18-87e830bdc9f6-kube-api-access-sfwf9\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067953 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-trusted-ca\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.067990 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-config\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.068016 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrf4\" (UniqueName: 
\"kubernetes.io/projected/5b26aa16-ebd4-47e8-bc74-c4e5185df358-kube-api-access-zrrf4\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.068072 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.068505 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.068788 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.068936 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.071918 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.072572 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.073435 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.074215 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.074293 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.074951 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bmh84"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.075464 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.076408 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wnsq8"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.084211 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.085248 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.085798 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.085957 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.086027 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.086831 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.086090 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.087291 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.086003 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.087201 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.087664 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.087694 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.087994 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.088680 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m8c4r"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.104421 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.133543 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.135268 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136426 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136559 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136635 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136716 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136839 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136869 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136948 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.136985 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137020 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137150 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137238 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137314 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137398 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137466 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137571 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.137647 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.141535 4752 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.141654 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.141909 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.142331 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.143282 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.143536 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.143641 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144085 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144193 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144262 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144395 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144472 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144642 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.144826 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.153742 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.154478 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.155533 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.157796 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.158887 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.160563 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.160752 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.161070 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.161839 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.165971 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kpx2c"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.166362 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.166641 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xn8dz"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.166961 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.167199 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pnz94"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.167428 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pjv6m"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.167782 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.168190 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.170478 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.170756 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171009 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171125 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171845 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-serving-cert\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171893 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4d04e4-638e-4b88-a629-951d94c6b23e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171915 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171932 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-serving-cert\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171950 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2g79\" (UniqueName: \"kubernetes.io/projected/5956ca03-413a-4077-9c54-5cd45f278f0f-kube-api-access-v2g79\") pod \"control-plane-machine-set-operator-78cbb6b69f-psf87\" (UID: \"5956ca03-413a-4077-9c54-5cd45f278f0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.171971 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-config\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172044 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d174189-03c1-40c5-9304-44f925f565c7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172083 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172192 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172274 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-etcd-client\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172303 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-client-ca\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172337 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d174189-03c1-40c5-9304-44f925f565c7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172364 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-encryption-config\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172388 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 
10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172430 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172478 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172508 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619941ce-9ead-4926-b8c0-f9108cd58462-audit-dir\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172541 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9k7t\" (UniqueName: \"kubernetes.io/projected/5e213e66-9429-41a1-9b53-476794092c7f-kube-api-access-w9k7t\") pod \"downloads-7954f5f757-bmh84\" (UID: \"5e213e66-9429-41a1-9b53-476794092c7f\") " pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172573 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5b26aa16-ebd4-47e8-bc74-c4e5185df358-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172603 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj98g\" (UniqueName: \"kubernetes.io/projected/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-kube-api-access-mj98g\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172628 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172660 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172689 4752 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/315cb527-e73b-4e1f-bc55-09c5c694cef9-audit-dir\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172710 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172724 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwf9\" (UniqueName: \"kubernetes.io/projected/b82cc492-857e-4eaf-8e18-87e830bdc9f6-kube-api-access-sfwf9\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172750 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/315cb527-e73b-4e1f-bc55-09c5c694cef9-node-pullsecrets\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172775 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrsrs\" (UniqueName: \"kubernetes.io/projected/4c4d04e4-638e-4b88-a629-951d94c6b23e-kube-api-access-jrsrs\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172804 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjlw\" (UniqueName: \"kubernetes.io/projected/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-kube-api-access-ppjlw\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172846 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwnx\" (UniqueName: \"kubernetes.io/projected/622b1b03-6c1d-460c-ac51-10046c682195-kube-api-access-8fwnx\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172898 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-trusted-ca\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172926 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-etcd-client\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " 
pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172958 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4d04e4-638e-4b88-a629-951d94c6b23e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.172984 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173012 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-config\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173040 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173069 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrrf4\" (UniqueName: \"kubernetes.io/projected/5b26aa16-ebd4-47e8-bc74-c4e5185df358-kube-api-access-zrrf4\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173115 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173140 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173186 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-serving-cert\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: 
I0122 10:26:49.173205 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b26aa16-ebd4-47e8-bc74-c4e5185df358-serving-cert\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173222 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173243 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173260 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173277 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-config\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173295 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5956ca03-413a-4077-9c54-5cd45f278f0f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psf87\" (UID: \"5956ca03-413a-4077-9c54-5cd45f278f0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173309 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5b8k\" (UniqueName: \"kubernetes.io/projected/619941ce-9ead-4926-b8c0-f9108cd58462-kube-api-access-j5b8k\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173325 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622b1b03-6c1d-460c-ac51-10046c682195-serving-cert\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173344 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173364 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-serving-cert\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173382 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxqg\" (UniqueName: \"kubernetes.io/projected/315cb527-e73b-4e1f-bc55-09c5c694cef9-kube-api-access-nkxqg\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173404 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173423 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-audit\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173439 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-image-import-ca\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.173648 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.174119 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.175673 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-trusted-ca\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.176193 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5b26aa16-ebd4-47e8-bc74-c4e5185df358-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177200 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177236 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64md4\" (UniqueName: \"kubernetes.io/projected/c82bf83f-1d94-41a2-ad18-2264806dd9ff-kube-api-access-64md4\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177279 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-audit-policies\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177296 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177338 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-config\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177365 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 
10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177388 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d174189-03c1-40c5-9304-44f925f565c7-config\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177408 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177452 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177469 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-encryption-config\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177538 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-policies\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177558 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-dir\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177575 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.177595 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-client-ca\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"
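
[Editor's note] The reconciler_common and operation_generator entries surrounding this point narrate the kubelet volume manager's reconcile loop, three steps per volume: "VerifyControllerAttachedVolume started" (reconciler_common.go:245) marks the volume attached in the actual state of world, "MountVolume started" (reconciler_common.go:218) launches the mount into the pod's volume directory, and "MountVolume.SetUp succeeded" (operation_generator.go:637) records completion, as in the entries just below. A schematic Go model of that loop; the types and steps are simplified stand-ins, not kubelet's real API:

    package main

    import "fmt"

    // volume is a simplified stand-in for the volume manager's
    // "volume to mount" records in the desired state of world.
    type volume struct{ name, pod string }

    // reconcile drives every not-yet-mounted volume through the same
    // three steps the log records for each (volume, pod) pair.
    func reconcile(desired []volume, mounted map[string]bool) {
    	for _, v := range desired {
    		key := v.pod + "/" + v.name
    		if mounted[key] {
    			continue // already reflected in the actual state of world
    		}
    		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, v.pod)
    		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
    		mounted[key] = true
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
    	}
    }

    func main() {
    	reconcile([]volume{{"trusted-ca", "console-operator-58897d9998-g476l"}}, map[string]bool{})
    }

[End note]
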
Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.179225 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-config\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.185574 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.185729 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.185761 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.185946 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.185986 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186128 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186181 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186281 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186345 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186479 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186660 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.186285 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.187035 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.188572 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-serving-cert\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.189953 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.201402 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.201567 4752 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication-operator"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.189950 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b26aa16-ebd4-47e8-bc74-c4e5185df358-serving-cert\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.208913 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.209580 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.209905 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.210750 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.210826 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.211721 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.212722 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.212961 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.215477 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.220582 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.233103 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.233421 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.233433 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.233492 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.233633 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.234149 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.234271 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.234251353 podStartE2EDuration="2.234251353s" podCreationTimestamp="2026-01-22 10:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:49.201827147 +0000 UTC m=+88.431770055" watchObservedRunningTime="2026-01-22 10:26:49.234251353 +0000 UTC m=+88.464194261" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.234223 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.234536 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.235720 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.236843 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.238809 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.239263 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.240926 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.241958 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.242284 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g476l"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.243312 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.244489 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5f5qf"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.245119 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.245365 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.245656 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.246896 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.249234 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zw6f2"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.249428 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jh6kp"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.249548 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.251556 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.251751 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.252526 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.252634 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.253500 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ldnv8"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.254014 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.254299 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.254350 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mhb4k"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.255573 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.256023 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.256303 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.256383 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.258674 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.259968 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.259969 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.260957 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.261151 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.262076 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.262373 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.263332 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.264482 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.266240 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kpx2c"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.268216 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.269040 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.269925 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.270339 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.271417 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bmh84"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.281422 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.281957 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-policies\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282012 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-dir\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282046 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282076 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-client-ca\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282101 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-serving-cert\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282125 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4d04e4-638e-4b88-a629-951d94c6b23e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282155 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282180 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-serving-cert\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282204 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2g79\" (UniqueName: \"kubernetes.io/projected/5956ca03-413a-4077-9c54-5cd45f278f0f-kube-api-access-v2g79\") pod \"control-plane-machine-set-operator-78cbb6b69f-psf87\" (UID: \"5956ca03-413a-4077-9c54-5cd45f278f0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282229 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-config\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282259 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d174189-03c1-40c5-9304-44f925f565c7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282289 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d174189-03c1-40c5-9304-44f925f565c7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282315 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282340 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-etcd-client\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282366 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-client-ca\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282391 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-encryption-config\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282415 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282443 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282484 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282517 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc 
kubenswrapper[4752]: I0122 10:26:49.282545 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619941ce-9ead-4926-b8c0-f9108cd58462-audit-dir\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282570 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9k7t\" (UniqueName: \"kubernetes.io/projected/5e213e66-9429-41a1-9b53-476794092c7f-kube-api-access-w9k7t\") pod \"downloads-7954f5f757-bmh84\" (UID: \"5e213e66-9429-41a1-9b53-476794092c7f\") " pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282606 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/315cb527-e73b-4e1f-bc55-09c5c694cef9-audit-dir\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282636 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/315cb527-e73b-4e1f-bc55-09c5c694cef9-node-pullsecrets\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282672 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwnx\" (UniqueName: \"kubernetes.io/projected/622b1b03-6c1d-460c-ac51-10046c682195-kube-api-access-8fwnx\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282703 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrsrs\" (UniqueName: \"kubernetes.io/projected/4c4d04e4-638e-4b88-a629-951d94c6b23e-kube-api-access-jrsrs\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282727 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppjlw\" (UniqueName: \"kubernetes.io/projected/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-kube-api-access-ppjlw\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282755 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-etcd-client\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282776 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4d04e4-638e-4b88-a629-951d94c6b23e-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282803 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282828 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282930 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.282973 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-serving-cert\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283007 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283034 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283057 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-config\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283085 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkxqg\" (UniqueName: \"kubernetes.io/projected/315cb527-e73b-4e1f-bc55-09c5c694cef9-kube-api-access-nkxqg\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 
10:26:49.283111 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5956ca03-413a-4077-9c54-5cd45f278f0f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psf87\" (UID: \"5956ca03-413a-4077-9c54-5cd45f278f0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283134 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5b8k\" (UniqueName: \"kubernetes.io/projected/619941ce-9ead-4926-b8c0-f9108cd58462-kube-api-access-j5b8k\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283154 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622b1b03-6c1d-460c-ac51-10046c682195-serving-cert\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283190 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-audit\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283216 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-image-import-ca\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283241 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283265 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64md4\" (UniqueName: \"kubernetes.io/projected/c82bf83f-1d94-41a2-ad18-2264806dd9ff-kube-api-access-64md4\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283290 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-audit-policies\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283474 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283657 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-config\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283693 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d174189-03c1-40c5-9304-44f925f565c7-config\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283721 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283750 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283842 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.283780 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-encryption-config\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.284560 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-dir\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.284820 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 
10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.285501 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.285818 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.286424 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.287150 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gx9kk"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.288739 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/619941ce-9ead-4926-b8c0-f9108cd58462-audit-dir\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.288762 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/315cb527-e73b-4e1f-bc55-09c5c694cef9-audit-dir\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.293993 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/315cb527-e73b-4e1f-bc55-09c5c694cef9-node-pullsecrets\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.295387 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-config\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.295588 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-client-ca\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.295785 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc 
kubenswrapper[4752]: I0122 10:26:49.296026 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.296651 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.296831 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4d04e4-638e-4b88-a629-951d94c6b23e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.297297 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nmbhc"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.297759 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.297899 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.297982 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.298044 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.298508 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-config\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.298698 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.298779 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4d04e4-638e-4b88-a629-951d94c6b23e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.299113 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.299114 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-policies\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.300166 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-etcd-client\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.300229 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.300259 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-client-ca\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.300935 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wm2bq"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.301315 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.301529 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.301939 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.302068 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.302475 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.302725 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5956ca03-413a-4077-9c54-5cd45f278f0f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psf87\" (UID: \"5956ca03-413a-4077-9c54-5cd45f278f0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.303318 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/619941ce-9ead-4926-b8c0-f9108cd58462-audit-policies\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.303434 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.303467 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-image-import-ca\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.303545 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-config\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.303927 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622b1b03-6c1d-460c-ac51-10046c682195-serving-cert\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:49 crc 
kubenswrapper[4752]: I0122 10:26:49.303981 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/315cb527-e73b-4e1f-bc55-09c5c694cef9-audit\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304018 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-serving-cert\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304198 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-encryption-config\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304260 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pt8rf"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304506 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304546 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-serving-cert\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304639 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/619941ce-9ead-4926-b8c0-f9108cd58462-encryption-config\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304652 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.304845 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d174189-03c1-40c5-9304-44f925f565c7-config\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.305079 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.305808 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.306076 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-serving-cert\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.306416 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.307033 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m8c4r"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.307669 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.308216 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/315cb527-e73b-4e1f-bc55-09c5c694cef9-etcd-client\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.308593 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.309182 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d174189-03c1-40c5-9304-44f925f565c7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.309301 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.310231 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.311350 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.312479 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pjv6m"] Jan 22 10:26:49 crc kubenswrapper[4752]: 
I0122 10:26:49.314360 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pnz94"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.315523 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wnsq8"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.316655 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nmbhc"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.318064 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5f5qf"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.319538 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.320078 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.321554 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mhb4k"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.324270 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g5js4"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.325759 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tbzz8"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.325891 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.326627 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.328777 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.330390 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xn8dz"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.331819 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.333673 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.335693 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.336847 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pt8rf"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.337885 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.338975 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.339349 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.342847 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.344954 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gx9kk"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.345518 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.347021 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.348096 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.349248 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.350230 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.351339 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g5js4"] Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.352333 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tbzz8"] Jan 22 10:26:49 crc kubenswrapper[4752]: 
I0122 10:26:49.359080 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.378957 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.398546 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.420044 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.439444 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.460192 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.478507 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.500079 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.535407 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrrf4\" (UniqueName: \"kubernetes.io/projected/5b26aa16-ebd4-47e8-bc74-c4e5185df358-kube-api-access-zrrf4\") pod \"openshift-config-operator-7777fb866f-2ct5b\" (UID: \"5b26aa16-ebd4-47e8-bc74-c4e5185df358\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.552395 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwf9\" (UniqueName: \"kubernetes.io/projected/b82cc492-857e-4eaf-8e18-87e830bdc9f6-kube-api-access-sfwf9\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.559839 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.579215 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.599640 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.619738 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.648935 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.675724 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.682330 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj98g\" (UniqueName: \"kubernetes.io/projected/7794f4fb-e44d-43b1-a6d4-eabcf5ffe671-kube-api-access-mj98g\") pod \"console-operator-58897d9998-g476l\" (UID: \"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671\") " pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.699242 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.719214 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.738868 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.759130 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.779178 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.799919 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.819987 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.839518 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.860093 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.879290 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.899227 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.919312 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.932203 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b"] Jan 22 10:26:49 crc kubenswrapper[4752]: W0122 10:26:49.938406 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b26aa16_ebd4_47e8_bc74_c4e5185df358.slice/crio-15dd0844d7ccb7edb9b2a269b2447c3f29b5b33b1ee56bbbbdbbe5eb87d890d0 WatchSource:0}: Error finding container 15dd0844d7ccb7edb9b2a269b2447c3f29b5b33b1ee56bbbbdbbe5eb87d890d0: Status 404 returned error can't 
find the container with id 15dd0844d7ccb7edb9b2a269b2447c3f29b5b33b1ee56bbbbdbbe5eb87d890d0 Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.952260 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.958541 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.980231 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 10:26:49 crc kubenswrapper[4752]: I0122 10:26:49.999403 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.020415 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.042540 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.061944 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.079502 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.097405 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.099319 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.120759 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.139422 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.161535 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.162057 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g476l"] Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.174414 4752 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.174509 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config podName:b82cc492-857e-4eaf-8e18-87e830bdc9f6 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:50.674483546 +0000 UTC m=+89.904426484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config") pod "console-f9d7485db-jh6kp" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177231 4752 secret.go:188] Couldn't get secret openshift-console/console-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177299 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert podName:b82cc492-857e-4eaf-8e18-87e830bdc9f6 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:50.677282339 +0000 UTC m=+89.907225287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert") pod "console-f9d7485db-jh6kp" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177348 4752 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177385 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert podName:b82cc492-857e-4eaf-8e18-87e830bdc9f6 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:50.677373762 +0000 UTC m=+89.907316700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert") pod "console-f9d7485db-jh6kp" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177454 4752 configmap.go:193] Couldn't get configMap openshift-console/service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177494 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca podName:b82cc492-857e-4eaf-8e18-87e830bdc9f6 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:50.677481485 +0000 UTC m=+89.907424433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca") pod "console-f9d7485db-jh6kp" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177526 4752 secret.go:188] Couldn't get secret openshift-console/console-oauth-config: failed to sync secret cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177563 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config podName:b82cc492-857e-4eaf-8e18-87e830bdc9f6 nodeName:}" failed. 
No retries permitted until 2026-01-22 10:26:50.677552507 +0000 UTC m=+89.907495445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config") pod "console-f9d7485db-jh6kp" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6") : failed to sync secret cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177633 4752 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: E0122 10:26:50.177667 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle podName:b82cc492-857e-4eaf-8e18-87e830bdc9f6 nodeName:}" failed. No retries permitted until 2026-01-22 10:26:50.677656759 +0000 UTC m=+89.907599697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle") pod "console-f9d7485db-jh6kp" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6") : failed to sync configmap cache: timed out waiting for the condition Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.179796 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 10:26:50 crc kubenswrapper[4752]: W0122 10:26:50.180215 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7794f4fb_e44d_43b1_a6d4_eabcf5ffe671.slice/crio-ba217677db424d116c81e0537317d56dcdd5d381de546f5786bef7554bdea9f9 WatchSource:0}: Error finding container ba217677db424d116c81e0537317d56dcdd5d381de546f5786bef7554bdea9f9: Status 404 returned error can't find the container with id ba217677db424d116c81e0537317d56dcdd5d381de546f5786bef7554bdea9f9 Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.198655 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.219502 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.239921 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.258060 4752 request.go:700] Waited for 1.001710769s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.260337 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.279537 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.299204 4752 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.320783 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.339330 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.359203 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.381215 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.400532 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.420244 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.440173 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.460273 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.480843 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.500075 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.520755 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.539268 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.560165 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.580165 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.599436 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.632637 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g476l" event={"ID":"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671","Type":"ContainerStarted","Data":"dbf5ad16c41c8937663eaf0c6a4f41615acbf72327ea366eb5144aa7684dc29e"} Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.632760 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-g476l" event={"ID":"7794f4fb-e44d-43b1-a6d4-eabcf5ffe671","Type":"ContainerStarted","Data":"ba217677db424d116c81e0537317d56dcdd5d381de546f5786bef7554bdea9f9"} Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.633048 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.635104 4752 patch_prober.go:28] interesting pod/console-operator-58897d9998-g476l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.635192 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g476l" podUID="7794f4fb-e44d-43b1-a6d4-eabcf5ffe671" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.636269 4752 generic.go:334] "Generic (PLEG): container finished" podID="5b26aa16-ebd4-47e8-bc74-c4e5185df358" containerID="e1a5236b347636dd03b2de9df10f127583aa484dbc3cfe50b804d6d0e37ccbad" exitCode=0 Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.636319 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" event={"ID":"5b26aa16-ebd4-47e8-bc74-c4e5185df358","Type":"ContainerDied","Data":"e1a5236b347636dd03b2de9df10f127583aa484dbc3cfe50b804d6d0e37ccbad"} Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.636352 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" event={"ID":"5b26aa16-ebd4-47e8-bc74-c4e5185df358","Type":"ContainerStarted","Data":"15dd0844d7ccb7edb9b2a269b2447c3f29b5b33b1ee56bbbbdbbe5eb87d890d0"} Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.652245 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppjlw\" (UniqueName: \"kubernetes.io/projected/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-kube-api-access-ppjlw\") pod \"controller-manager-879f6c89f-wnsq8\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.660113 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkxqg\" (UniqueName: \"kubernetes.io/projected/315cb527-e73b-4e1f-bc55-09c5c694cef9-kube-api-access-nkxqg\") pod \"apiserver-76f77b778f-m8c4r\" (UID: \"315cb527-e73b-4e1f-bc55-09c5c694cef9\") " pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.680920 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2g79\" (UniqueName: \"kubernetes.io/projected/5956ca03-413a-4077-9c54-5cd45f278f0f-kube-api-access-v2g79\") pod \"control-plane-machine-set-operator-78cbb6b69f-psf87\" (UID: \"5956ca03-413a-4077-9c54-5cd45f278f0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.694910 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrsrs\" 
(UniqueName: \"kubernetes.io/projected/4c4d04e4-638e-4b88-a629-951d94c6b23e-kube-api-access-jrsrs\") pod \"openshift-apiserver-operator-796bbdcf4f-lp2rk\" (UID: \"4c4d04e4-638e-4b88-a629-951d94c6b23e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.698707 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.700158 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.700245 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.700301 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.700335 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.700388 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.700477 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.710333 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.719541 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.740127 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.760181 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.760944 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.779759 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.784483 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.800635 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.825345 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.842029 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.860016 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.882066 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.914396 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5b8k\" (UniqueName: \"kubernetes.io/projected/619941ce-9ead-4926-b8c0-f9108cd58462-kube-api-access-j5b8k\") pod \"apiserver-7bbb656c7d-6ghbf\" (UID: \"619941ce-9ead-4926-b8c0-f9108cd58462\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.920221 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.954804 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d174189-03c1-40c5-9304-44f925f565c7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zq8bp\" (UID: \"0d174189-03c1-40c5-9304-44f925f565c7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.965621 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.979644 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64md4\" (UniqueName: \"kubernetes.io/projected/c82bf83f-1d94-41a2-ad18-2264806dd9ff-kube-api-access-64md4\") pod \"oauth-openshift-558db77b4-zw6f2\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:50 crc kubenswrapper[4752]: I0122 10:26:50.989025 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m8c4r"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.002005 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwnx\" (UniqueName: \"kubernetes.io/projected/622b1b03-6c1d-460c-ac51-10046c682195-kube-api-access-8fwnx\") pod \"route-controller-manager-6576b87f9c-4t27t\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:51 crc kubenswrapper[4752]: W0122 10:26:51.010969 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod315cb527_e73b_4e1f_bc55_09c5c694cef9.slice/crio-6d02173952fc65533fd90b23cbc8203f113c469b4c3f5556fd26c1dc50242d7e WatchSource:0}: Error finding container 6d02173952fc65533fd90b23cbc8203f113c469b4c3f5556fd26c1dc50242d7e: Status 404 returned error can't find the container with id 6d02173952fc65533fd90b23cbc8203f113c469b4c3f5556fd26c1dc50242d7e Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.015435 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9k7t\" (UniqueName: \"kubernetes.io/projected/5e213e66-9429-41a1-9b53-476794092c7f-kube-api-access-w9k7t\") pod \"downloads-7954f5f757-bmh84\" (UID: \"5e213e66-9429-41a1-9b53-476794092c7f\") " pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.019038 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.035065 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.038232 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wnsq8"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.038953 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.045685 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.049934 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:51 crc kubenswrapper[4752]: W0122 10:26:51.058673 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfad3a088_8bfc_4fd4_9e82_65d5c43c3f6d.slice/crio-d15035fdcb7027328fcbc6342611d2706887e7c04f2a9eeaf902294558e344b8 WatchSource:0}: Error finding container d15035fdcb7027328fcbc6342611d2706887e7c04f2a9eeaf902294558e344b8: Status 404 returned error can't find the container with id d15035fdcb7027328fcbc6342611d2706887e7c04f2a9eeaf902294558e344b8 Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.060047 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.079798 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.100626 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.119331 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.120199 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.141351 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.159064 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.168370 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.181454 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.187545 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.199667 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.219737 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.239021 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.241368 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.248590 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.258073 4752 request.go:700] Waited for 1.951435621s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dcollect-profiles-config&limit=500&resourceVersion=0 Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.259649 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.279238 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:26:51 crc kubenswrapper[4752]: W0122 10:26:51.287819 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d174189_03c1_40c5_9304_44f925f565c7.slice/crio-95176b7ded29405d172c8a06267d0dd2c8efd490355ffe0dc1192000133ffefe WatchSource:0}: Error finding container 95176b7ded29405d172c8a06267d0dd2c8efd490355ffe0dc1192000133ffefe: Status 404 returned error can't find the container with id 95176b7ded29405d172c8a06267d0dd2c8efd490355ffe0dc1192000133ffefe Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.303091 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.319198 4752 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.328736 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bmh84"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.340528 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.359436 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.379757 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 10:26:51 crc kubenswrapper[4752]: W0122 10:26:51.393225 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e213e66_9429_41a1_9b53_476794092c7f.slice/crio-dfc80f2755588f0f26abc236ec1847b78c20d53ec87fdd4e1f3cfe468b498e33 WatchSource:0}: Error finding container dfc80f2755588f0f26abc236ec1847b78c20d53ec87fdd4e1f3cfe468b498e33: Status 404 returned error can't find the container with id dfc80f2755588f0f26abc236ec1847b78c20d53ec87fdd4e1f3cfe468b498e33 Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.399010 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.462518 4752 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.479538 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.486180 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zw6f2"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.506167 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.514434 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.514473 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-auth-proxy-config\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.514526 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr2pt\" (UniqueName: \"kubernetes.io/projected/d0c57c83-4f36-4531-8f22-e3e37b49d843-kube-api-access-gr2pt\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.514572 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-serving-cert\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.514602 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-registry-certificates\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.515393 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w848\" (UniqueName: \"kubernetes.io/projected/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-kube-api-access-5w848\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.515425 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-service-ca-bundle\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: 
\"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.515510 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-registry-tls\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.515532 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9vn\" (UniqueName: \"kubernetes.io/projected/f1aaae5a-1812-407f-bc93-79edf6ef6476-kube-api-access-7g9vn\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516362 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-config\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516402 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee93ff7-fb56-4cb1-846c-790c91498c6b-serving-cert\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516589 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e312683-699a-4ea1-9914-d9dc8b237cb4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5t8fw\" (UID: \"1e312683-699a-4ea1-9914-d9dc8b237cb4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516671 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4140f15a-5e23-431b-ad69-a64d54325d19-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516696 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-machine-approver-tls\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516719 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd7570aa-f486-408b-b0c6-83e0903fa3e8-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516745 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eab70863-0bea-4fb3-9265-045d0e2dff04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.516803 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzz\" (UniqueName: \"kubernetes.io/projected/eab70863-0bea-4fb3-9265-045d0e2dff04-kube-api-access-n7pzz\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.517916 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518006 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4140f15a-5e23-431b-ad69-a64d54325d19-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518033 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tcl\" (UniqueName: \"kubernetes.io/projected/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-kube-api-access-66tcl\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518059 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b3563ce-e872-4f8d-b605-a6962b979d53-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518120 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k5jc\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-kube-api-access-9k5jc\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518172 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-bound-sa-token\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518201 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqlk\" (UniqueName: \"kubernetes.io/projected/1e312683-699a-4ea1-9914-d9dc8b237cb4-kube-api-access-sbqlk\") pod \"cluster-samples-operator-665b6dd947-5t8fw\" (UID: \"1e312683-699a-4ea1-9914-d9dc8b237cb4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518229 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-config\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518248 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94e46fc-5978-4d73-b811-463257b90e7c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518284 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c57c83-4f36-4531-8f22-e3e37b49d843-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518314 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64s5r\" (UniqueName: \"kubernetes.io/projected/122ebfd5-3b50-40a3-929c-0751226c5253-kube-api-access-64s5r\") pod \"dns-operator-744455d44c-pjv6m\" (UID: \"122ebfd5-3b50-40a3-929c-0751226c5253\") " pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518330 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94e46fc-5978-4d73-b811-463257b90e7c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518347 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1aaae5a-1812-407f-bc93-79edf6ef6476-metrics-tls\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518370 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1aaae5a-1812-407f-bc93-79edf6ef6476-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518390 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-trusted-ca\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518408 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eab70863-0bea-4fb3-9265-045d0e2dff04-images\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518427 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrkp\" (UniqueName: \"kubernetes.io/projected/bee93ff7-fb56-4cb1-846c-790c91498c6b-kube-api-access-8xrkp\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518461 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-config\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: E0122 10:26:51.518475 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.018458641 +0000 UTC m=+91.248401759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518794 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/122ebfd5-3b50-40a3-929c-0751226c5253-metrics-tls\") pod \"dns-operator-744455d44c-pjv6m\" (UID: \"122ebfd5-3b50-40a3-929c-0751226c5253\") " pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518817 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c57c83-4f36-4531-8f22-e3e37b49d843-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.518872 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1aaae5a-1812-407f-bc93-79edf6ef6476-trusted-ca\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.519722 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520030 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-ca\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520081 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520170 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zw5\" (UniqueName: \"kubernetes.io/projected/bd7570aa-f486-408b-b0c6-83e0903fa3e8-kube-api-access-57zw5\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520262 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bd7570aa-f486-408b-b0c6-83e0903fa3e8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520289 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd7570aa-f486-408b-b0c6-83e0903fa3e8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520424 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwd2d\" (UniqueName: \"kubernetes.io/projected/a94e46fc-5978-4d73-b811-463257b90e7c-kube-api-access-wwd2d\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.520844 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3563ce-e872-4f8d-b605-a6962b979d53-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.521059 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3563ce-e872-4f8d-b605-a6962b979d53-config\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.521157 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-service-ca\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.521191 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-client\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.521226 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab70863-0bea-4fb3-9265-045d0e2dff04-config\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.521509 4752 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: W0122 10:26:51.529267 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82bf83f_1d94_41a2_ad18_2264806dd9ff.slice/crio-f7273e3e41e740468a28c65cea0dc1c137948fd320c682b1583937645b54274d WatchSource:0}: Error finding container f7273e3e41e740468a28c65cea0dc1c137948fd320c682b1583937645b54274d: Status 404 returned error can't find the container with id f7273e3e41e740468a28c65cea0dc1c137948fd320c682b1583937645b54274d Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.541306 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.542506 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.548440 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.558422 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.565307 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.586862 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.588532 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf"] Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.592197 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.598742 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 10:26:51 crc kubenswrapper[4752]: W0122 10:26:51.603966 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619941ce_9ead_4926_b8c0_f9108cd58462.slice/crio-53c8192973cc347e10d857a7307c93f3c7f8644277bb9068e6522fefb890c575 WatchSource:0}: Error finding container 53c8192973cc347e10d857a7307c93f3c7f8644277bb9068e6522fefb890c575: Status 404 returned error can't find the container with id 
53c8192973cc347e10d857a7307c93f3c7f8644277bb9068e6522fefb890c575 Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621597 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621761 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd7570aa-f486-408b-b0c6-83e0903fa3e8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621785 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7pzz\" (UniqueName: \"kubernetes.io/projected/eab70863-0bea-4fb3-9265-045d0e2dff04-kube-api-access-n7pzz\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621803 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-machine-approver-tls\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621830 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk55g\" (UniqueName: \"kubernetes.io/projected/124095a6-83e3-45da-8bed-6ed5a8f6892b-kube-api-access-rk55g\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621846 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1312dce6-4901-499e-a380-fcf84c6126c4-proxy-tls\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621875 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4140f15a-5e23-431b-ad69-a64d54325d19-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621909 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tcl\" (UniqueName: \"kubernetes.io/projected/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-kube-api-access-66tcl\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc 
kubenswrapper[4752]: I0122 10:26:51.621928 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621945 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-stats-auth\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621963 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-srv-cert\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621978 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b3563ce-e872-4f8d-b605-a6962b979d53-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.621994 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0797c211-fc73-476d-9dc2-383a5a9d1dcc-webhook-cert\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622008 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d458e72-0fea-4998-9f96-b2c4c5427f39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622028 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-bound-sa-token\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622043 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e9670cb-efb5-48a4-b621-b9da02d8afef-node-bootstrap-token\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622058 
4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8667d5ed-56e8-42ad-86ad-63aa962e7c96-proxy-tls\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622075 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3114586-1658-4733-b9f6-7a6ebcf25b46-signing-key\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622097 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-config\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622116 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdmrl\" (UniqueName: \"kubernetes.io/projected/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-kube-api-access-tdmrl\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: E0122 10:26:51.622136 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.122119477 +0000 UTC m=+91.352062385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622156 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64s5r\" (UniqueName: \"kubernetes.io/projected/122ebfd5-3b50-40a3-929c-0751226c5253-kube-api-access-64s5r\") pod \"dns-operator-744455d44c-pjv6m\" (UID: \"122ebfd5-3b50-40a3-929c-0751226c5253\") " pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622178 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8667d5ed-56e8-42ad-86ad-63aa962e7c96-images\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622196 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94e46fc-5978-4d73-b811-463257b90e7c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622215 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac0bbe2-2764-4862-9b13-9940b8720dcf-config\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622234 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eab70863-0bea-4fb3-9265-045d0e2dff04-images\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622253 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrkp\" (UniqueName: \"kubernetes.io/projected/bee93ff7-fb56-4cb1-846c-790c91498c6b-kube-api-access-8xrkp\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622460 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/122ebfd5-3b50-40a3-929c-0751226c5253-metrics-tls\") pod \"dns-operator-744455d44c-pjv6m\" (UID: \"122ebfd5-3b50-40a3-929c-0751226c5253\") " pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622485 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c57c83-4f36-4531-8f22-e3e37b49d843-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.622946 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-config\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.623675 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94e46fc-5978-4d73-b811-463257b90e7c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624074 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eab70863-0bea-4fb3-9265-045d0e2dff04-images\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624148 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwn9\" (UniqueName: \"kubernetes.io/projected/73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce-kube-api-access-jrwn9\") pod \"multus-admission-controller-857f4d67dd-mhb4k\" (UID: \"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624168 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c57c83-4f36-4531-8f22-e3e37b49d843-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624368 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hm5\" (UniqueName: \"kubernetes.io/projected/2ed74970-560a-4f45-84e8-ebedcaf74392-kube-api-access-t2hm5\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624393 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624412 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tgkf\" (UniqueName: \"kubernetes.io/projected/1312dce6-4901-499e-a380-fcf84c6126c4-kube-api-access-2tgkf\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624431 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0797c211-fc73-476d-9dc2-383a5a9d1dcc-tmpfs\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624450 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-ca\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624477 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd7570aa-f486-408b-b0c6-83e0903fa3e8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624494 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624510 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-profile-collector-cert\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624533 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjtq\" (UniqueName: \"kubernetes.io/projected/3d458e72-0fea-4998-9f96-b2c4c5427f39-kube-api-access-fwjtq\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624551 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3563ce-e872-4f8d-b605-a6962b979d53-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624567 
4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3563ce-e872-4f8d-b605-a6962b979d53-config\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624585 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-auth-proxy-config\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624600 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-default-certificate\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624625 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624639 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzpv\" (UniqueName: \"kubernetes.io/projected/8667d5ed-56e8-42ad-86ad-63aa962e7c96-kube-api-access-5wzpv\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624652 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-metrics-certs\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624668 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-registry-certificates\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.624682 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2kb\" (UniqueName: \"kubernetes.io/projected/b35e3ab6-301b-4b5d-80f6-ba8e1e301d60-kube-api-access-6z2kb\") pod \"ingress-canary-pt8rf\" (UID: \"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60\") " pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.625421 4752 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-service-ca-bundle\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.625456 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlzx\" (UniqueName: \"kubernetes.io/projected/0797c211-fc73-476d-9dc2-383a5a9d1dcc-kube-api-access-mnlzx\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.625473 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d458e72-0fea-4998-9f96-b2c4c5427f39-srv-cert\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.625775 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-machine-approver-tls\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.626080 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-ca\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.626608 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-registry-tls\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.626702 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g9vn\" (UniqueName: \"kubernetes.io/projected/f1aaae5a-1812-407f-bc93-79edf6ef6476-kube-api-access-7g9vn\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.626793 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1312dce6-4901-499e-a380-fcf84c6126c4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627021 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1e312683-699a-4ea1-9914-d9dc8b237cb4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5t8fw\" (UID: \"1e312683-699a-4ea1-9914-d9dc8b237cb4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627059 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfq6s\" (UniqueName: \"kubernetes.io/projected/c3114586-1658-4733-b9f6-7a6ebcf25b46-kube-api-access-wfq6s\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627330 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-auth-proxy-config\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627432 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-mountpoint-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627501 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4140f15a-5e23-431b-ad69-a64d54325d19-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627532 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eab70863-0bea-4fb3-9265-045d0e2dff04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627595 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k5jc\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-kube-api-access-9k5jc\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627627 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqlk\" (UniqueName: \"kubernetes.io/projected/1e312683-699a-4ea1-9914-d9dc8b237cb4-kube-api-access-sbqlk\") pod \"cluster-samples-operator-665b6dd947-5t8fw\" (UID: \"1e312683-699a-4ea1-9914-d9dc8b237cb4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627700 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5e9670cb-efb5-48a4-b621-b9da02d8afef-certs\") pod \"machine-config-server-wm2bq\" 
(UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627755 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rg7t\" (UniqueName: \"kubernetes.io/projected/5e9670cb-efb5-48a4-b621-b9da02d8afef-kube-api-access-7rg7t\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627787 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94e46fc-5978-4d73-b811-463257b90e7c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627833 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-registration-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627890 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l26\" (UniqueName: \"kubernetes.io/projected/48336b51-2e64-4f1e-a96f-5f866900ba2a-kube-api-access-d5l26\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627923 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c57c83-4f36-4531-8f22-e3e37b49d843-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.627976 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8667d5ed-56e8-42ad-86ad-63aa962e7c96-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628000 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/122ebfd5-3b50-40a3-929c-0751226c5253-metrics-tls\") pod \"dns-operator-744455d44c-pjv6m\" (UID: \"122ebfd5-3b50-40a3-929c-0751226c5253\") " pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628012 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1aaae5a-1812-407f-bc93-79edf6ef6476-metrics-tls\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628067 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1aaae5a-1812-407f-bc93-79edf6ef6476-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628098 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-trusted-ca\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628156 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-config\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628184 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0797c211-fc73-476d-9dc2-383a5a9d1dcc-apiservice-cert\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628835 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3563ce-e872-4f8d-b605-a6962b979d53-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629082 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd7570aa-f486-408b-b0c6-83e0903fa3e8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.628241 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dhq\" (UniqueName: \"kubernetes.io/projected/f29c0509-af02-4b97-981a-cc0f24848953-kube-api-access-69dhq\") pod \"package-server-manager-789f6589d5-hrggg\" (UID: \"f29c0509-af02-4b97-981a-cc0f24848953\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629578 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrtp\" (UniqueName: \"kubernetes.io/projected/f6c2b2d2-d728-4416-aebd-a4ce23716f41-kube-api-access-2jrtp\") pod \"migrator-59844c95c7-djs29\" (UID: \"f6c2b2d2-d728-4416-aebd-a4ce23716f41\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629610 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb8be535-82d3-4a30-b6aa-45058a58f30e-metrics-tls\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629638 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1aaae5a-1812-407f-bc93-79edf6ef6476-trusted-ca\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629679 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-csi-data-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629703 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mhb4k\" (UID: \"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629724 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-plugins-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629779 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629801 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ed74970-560a-4f45-84e8-ebedcaf74392-secret-volume\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629828 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f29c0509-af02-4b97-981a-cc0f24848953-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hrggg\" (UID: \"f29c0509-af02-4b97-981a-cc0f24848953\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629882 
4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zw5\" (UniqueName: \"kubernetes.io/projected/bd7570aa-f486-408b-b0c6-83e0903fa3e8-kube-api-access-57zw5\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629905 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124095a6-83e3-45da-8bed-6ed5a8f6892b-service-ca-bundle\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629932 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd7570aa-f486-408b-b0c6-83e0903fa3e8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629955 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwd2d\" (UniqueName: \"kubernetes.io/projected/a94e46fc-5978-4d73-b811-463257b90e7c-kube-api-access-wwd2d\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.629979 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw98d\" (UniqueName: \"kubernetes.io/projected/cb8be535-82d3-4a30-b6aa-45058a58f30e-kube-api-access-mw98d\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630004 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-service-ca\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630026 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-client\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630048 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab70863-0bea-4fb3-9265-045d0e2dff04-config\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630071 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cb8be535-82d3-4a30-b6aa-45058a58f30e-config-volume\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630095 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b35e3ab6-301b-4b5d-80f6-ba8e1e301d60-cert\") pod \"ingress-canary-pt8rf\" (UID: \"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60\") " pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630121 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr2pt\" (UniqueName: \"kubernetes.io/projected/d0c57c83-4f36-4531-8f22-e3e37b49d843-kube-api-access-gr2pt\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630144 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630169 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3114586-1658-4733-b9f6-7a6ebcf25b46-signing-cabundle\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630190 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25npf\" (UniqueName: \"kubernetes.io/projected/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-kube-api-access-25npf\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630215 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cms42\" (UniqueName: \"kubernetes.io/projected/2ac0bbe2-2764-4862-9b13-9940b8720dcf-kube-api-access-cms42\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630240 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-serving-cert\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630265 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac0bbe2-2764-4862-9b13-9940b8720dcf-serving-cert\") pod 
\"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630286 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed74970-560a-4f45-84e8-ebedcaf74392-config-volume\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630312 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w848\" (UniqueName: \"kubernetes.io/projected/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-kube-api-access-5w848\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630337 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-config\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630357 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-socket-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630380 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee93ff7-fb56-4cb1-846c-790c91498c6b-serving-cert\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.630731 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e312683-699a-4ea1-9914-d9dc8b237cb4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-5t8fw\" (UID: \"1e312683-699a-4ea1-9914-d9dc8b237cb4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.631225 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eab70863-0bea-4fb3-9265-045d0e2dff04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.631772 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-service-ca\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc 
kubenswrapper[4752]: I0122 10:26:51.632298 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c57c83-4f36-4531-8f22-e3e37b49d843-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.633170 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-service-ca-bundle\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.633175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3563ce-e872-4f8d-b605-a6962b979d53-config\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.633772 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4140f15a-5e23-431b-ad69-a64d54325d19-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.634366 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-registry-certificates\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.635230 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-config\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.635263 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4140f15a-5e23-431b-ad69-a64d54325d19-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.635327 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-trusted-ca\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.635594 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eab70863-0bea-4fb3-9265-045d0e2dff04-config\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.635781 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1aaae5a-1812-407f-bc93-79edf6ef6476-trusted-ca\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.636155 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-registry-tls\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.639044 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd7570aa-f486-408b-b0c6-83e0903fa3e8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.639167 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config\") pod \"console-f9d7485db-jh6kp\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.640429 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a94e46fc-5978-4d73-b811-463257b90e7c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.640597 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.640962 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-etcd-client\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.641232 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee93ff7-fb56-4cb1-846c-790c91498c6b-config\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc 
kubenswrapper[4752]: I0122 10:26:51.643234 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-serving-cert\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.643727 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1aaae5a-1812-407f-bc93-79edf6ef6476-metrics-tls\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.647394 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee93ff7-fb56-4cb1-846c-790c91498c6b-serving-cert\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.657430 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" event={"ID":"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d","Type":"ContainerStarted","Data":"4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.657473 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" event={"ID":"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d","Type":"ContainerStarted","Data":"d15035fdcb7027328fcbc6342611d2706887e7c04f2a9eeaf902294558e344b8"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.658330 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.659112 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" event={"ID":"0d174189-03c1-40c5-9304-44f925f565c7","Type":"ContainerStarted","Data":"95176b7ded29405d172c8a06267d0dd2c8efd490355ffe0dc1192000133ffefe"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.660867 4752 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wnsq8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.660906 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" podUID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.661709 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" event={"ID":"619941ce-9ead-4926-b8c0-f9108cd58462","Type":"ContainerStarted","Data":"53c8192973cc347e10d857a7307c93f3c7f8644277bb9068e6522fefb890c575"} Jan 22 10:26:51 crc kubenswrapper[4752]: 
I0122 10:26:51.664147 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" event={"ID":"5b26aa16-ebd4-47e8-bc74-c4e5185df358","Type":"ContainerStarted","Data":"ca7c800bec26cc98d952f41800cd554cf0f985d28b6b4b0898ac80395c59d5e3"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.664407 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.666421 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" event={"ID":"5956ca03-413a-4077-9c54-5cd45f278f0f","Type":"ContainerStarted","Data":"52243a4873b46cbd6a4afe88614f0f8568c14d5de4311231e634ad59892ddd90"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.666456 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" event={"ID":"5956ca03-413a-4077-9c54-5cd45f278f0f","Type":"ContainerStarted","Data":"63acb939073c2499705f528557f7c4ad2aa5a0dce3c3be2f597cc21d8b10753a"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.669577 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" event={"ID":"c82bf83f-1d94-41a2-ad18-2264806dd9ff","Type":"ContainerStarted","Data":"f7273e3e41e740468a28c65cea0dc1c137948fd320c682b1583937645b54274d"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.671378 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bmh84" event={"ID":"5e213e66-9429-41a1-9b53-476794092c7f","Type":"ContainerStarted","Data":"dfc80f2755588f0f26abc236ec1847b78c20d53ec87fdd4e1f3cfe468b498e33"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.673029 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" event={"ID":"4c4d04e4-638e-4b88-a629-951d94c6b23e","Type":"ContainerStarted","Data":"babb113acf4115437f06bd3120e9275c63181638c92a431ca88e36fa60ab76e8"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.673066 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" event={"ID":"4c4d04e4-638e-4b88-a629-951d94c6b23e","Type":"ContainerStarted","Data":"81df44b3296b472e8f6ffedcbeaa23bf4bdc2f927e5f006fd2d9ea2b43f62234"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.673468 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7pzz\" (UniqueName: \"kubernetes.io/projected/eab70863-0bea-4fb3-9265-045d0e2dff04-kube-api-access-n7pzz\") pod \"machine-api-operator-5694c8668f-5f5qf\" (UID: \"eab70863-0bea-4fb3-9265-045d0e2dff04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.675667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" event={"ID":"622b1b03-6c1d-460c-ac51-10046c682195","Type":"ContainerStarted","Data":"0532f52fe3fd9674f50916d0bc0e1da3f9a78bf465d53c424e67d889474cd025"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.677045 4752 generic.go:334] "Generic (PLEG): container finished" podID="315cb527-e73b-4e1f-bc55-09c5c694cef9" 
containerID="5f660e9cac36b8be91b05f1afe9585971888f190e620269c821990198566fbe0" exitCode=0 Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.678194 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" event={"ID":"315cb527-e73b-4e1f-bc55-09c5c694cef9","Type":"ContainerDied","Data":"5f660e9cac36b8be91b05f1afe9585971888f190e620269c821990198566fbe0"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.678217 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" event={"ID":"315cb527-e73b-4e1f-bc55-09c5c694cef9","Type":"ContainerStarted","Data":"6d02173952fc65533fd90b23cbc8203f113c469b4c3f5556fd26c1dc50242d7e"} Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.684601 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-g476l" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.699052 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b3563ce-e872-4f8d-b605-a6962b979d53-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rcr55\" (UID: \"0b3563ce-e872-4f8d-b605-a6962b979d53\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.720541 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bd7570aa-f486-408b-b0c6-83e0903fa3e8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731403 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3114586-1658-4733-b9f6-7a6ebcf25b46-signing-key\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731757 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e9670cb-efb5-48a4-b621-b9da02d8afef-node-bootstrap-token\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731781 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8667d5ed-56e8-42ad-86ad-63aa962e7c96-proxy-tls\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731809 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdmrl\" (UniqueName: \"kubernetes.io/projected/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-kube-api-access-tdmrl\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 
10:26:51.731892 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8667d5ed-56e8-42ad-86ad-63aa962e7c96-images\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731922 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac0bbe2-2764-4862-9b13-9940b8720dcf-config\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731961 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwn9\" (UniqueName: \"kubernetes.io/projected/73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce-kube-api-access-jrwn9\") pod \"multus-admission-controller-857f4d67dd-mhb4k\" (UID: \"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.731985 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0797c211-fc73-476d-9dc2-383a5a9d1dcc-tmpfs\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732010 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hm5\" (UniqueName: \"kubernetes.io/projected/2ed74970-560a-4f45-84e8-ebedcaf74392-kube-api-access-t2hm5\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732032 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732076 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tgkf\" (UniqueName: \"kubernetes.io/projected/1312dce6-4901-499e-a380-fcf84c6126c4-kube-api-access-2tgkf\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732112 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-profile-collector-cert\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732135 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732156 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwjtq\" (UniqueName: \"kubernetes.io/projected/3d458e72-0fea-4998-9f96-b2c4c5427f39-kube-api-access-fwjtq\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732188 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-default-certificate\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732213 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzpv\" (UniqueName: \"kubernetes.io/projected/8667d5ed-56e8-42ad-86ad-63aa962e7c96-kube-api-access-5wzpv\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732234 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-metrics-certs\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732256 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732278 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2kb\" (UniqueName: \"kubernetes.io/projected/b35e3ab6-301b-4b5d-80f6-ba8e1e301d60-kube-api-access-6z2kb\") pod \"ingress-canary-pt8rf\" (UID: \"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60\") " pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732302 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d458e72-0fea-4998-9f96-b2c4c5427f39-srv-cert\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732328 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlzx\" (UniqueName: \"kubernetes.io/projected/0797c211-fc73-476d-9dc2-383a5a9d1dcc-kube-api-access-mnlzx\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: 
\"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732359 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1312dce6-4901-499e-a380-fcf84c6126c4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732420 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfq6s\" (UniqueName: \"kubernetes.io/projected/c3114586-1658-4733-b9f6-7a6ebcf25b46-kube-api-access-wfq6s\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732447 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-mountpoint-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732493 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5e9670cb-efb5-48a4-b621-b9da02d8afef-certs\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732519 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rg7t\" (UniqueName: \"kubernetes.io/projected/5e9670cb-efb5-48a4-b621-b9da02d8afef-kube-api-access-7rg7t\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732545 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-registration-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732569 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5l26\" (UniqueName: \"kubernetes.io/projected/48336b51-2e64-4f1e-a96f-5f866900ba2a-kube-api-access-d5l26\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732594 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8667d5ed-56e8-42ad-86ad-63aa962e7c96-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732624 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0797c211-fc73-476d-9dc2-383a5a9d1dcc-apiservice-cert\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732651 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69dhq\" (UniqueName: \"kubernetes.io/projected/f29c0509-af02-4b97-981a-cc0f24848953-kube-api-access-69dhq\") pod \"package-server-manager-789f6589d5-hrggg\" (UID: \"f29c0509-af02-4b97-981a-cc0f24848953\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732676 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrtp\" (UniqueName: \"kubernetes.io/projected/f6c2b2d2-d728-4416-aebd-a4ce23716f41-kube-api-access-2jrtp\") pod \"migrator-59844c95c7-djs29\" (UID: \"f6c2b2d2-d728-4416-aebd-a4ce23716f41\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732697 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb8be535-82d3-4a30-b6aa-45058a58f30e-metrics-tls\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732718 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-csi-data-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732738 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-plugins-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732760 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mhb4k\" (UID: \"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732784 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f29c0509-af02-4b97-981a-cc0f24848953-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hrggg\" (UID: \"f29c0509-af02-4b97-981a-cc0f24848953\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732810 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ed74970-560a-4f45-84e8-ebedcaf74392-secret-volume\") pod \"collect-profiles-29484615-wmzmg\" (UID: 
\"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732838 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124095a6-83e3-45da-8bed-6ed5a8f6892b-service-ca-bundle\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732897 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw98d\" (UniqueName: \"kubernetes.io/projected/cb8be535-82d3-4a30-b6aa-45058a58f30e-kube-api-access-mw98d\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732919 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8be535-82d3-4a30-b6aa-45058a58f30e-config-volume\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732940 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b35e3ab6-301b-4b5d-80f6-ba8e1e301d60-cert\") pod \"ingress-canary-pt8rf\" (UID: \"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60\") " pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732961 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3114586-1658-4733-b9f6-7a6ebcf25b46-signing-cabundle\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.732983 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25npf\" (UniqueName: \"kubernetes.io/projected/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-kube-api-access-25npf\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733021 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733045 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cms42\" (UniqueName: \"kubernetes.io/projected/2ac0bbe2-2764-4862-9b13-9940b8720dcf-kube-api-access-cms42\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733067 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2ac0bbe2-2764-4862-9b13-9940b8720dcf-serving-cert\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733100 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed74970-560a-4f45-84e8-ebedcaf74392-config-volume\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733124 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-socket-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733159 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733183 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk55g\" (UniqueName: \"kubernetes.io/projected/124095a6-83e3-45da-8bed-6ed5a8f6892b-kube-api-access-rk55g\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733206 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1312dce6-4901-499e-a380-fcf84c6126c4-proxy-tls\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733237 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0797c211-fc73-476d-9dc2-383a5a9d1dcc-webhook-cert\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733258 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733279 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-stats-auth\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " 
pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733299 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-srv-cert\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.733321 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d458e72-0fea-4998-9f96-b2c4c5427f39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.734324 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8667d5ed-56e8-42ad-86ad-63aa962e7c96-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.735144 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0797c211-fc73-476d-9dc2-383a5a9d1dcc-tmpfs\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.735884 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac0bbe2-2764-4862-9b13-9940b8720dcf-config\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.736207 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed74970-560a-4f45-84e8-ebedcaf74392-config-volume\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.736266 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1312dce6-4901-499e-a380-fcf84c6126c4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.736439 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8667d5ed-56e8-42ad-86ad-63aa962e7c96-images\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.736655 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d458e72-0fea-4998-9f96-b2c4c5427f39-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.736672 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-socket-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.737076 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-mountpoint-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.737274 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-registration-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.737463 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f29c0509-af02-4b97-981a-cc0f24848953-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hrggg\" (UID: \"f29c0509-af02-4b97-981a-cc0f24848953\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.737509 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-plugins-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.738114 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3114586-1658-4733-b9f6-7a6ebcf25b46-signing-cabundle\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.738466 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124095a6-83e3-45da-8bed-6ed5a8f6892b-service-ca-bundle\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.738819 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48336b51-2e64-4f1e-a96f-5f866900ba2a-csi-data-dir\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.738896 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.739100 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8be535-82d3-4a30-b6aa-45058a58f30e-config-volume\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.741720 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: E0122 10:26:51.742480 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.242455068 +0000 UTC m=+91.472397976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.743509 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.744008 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-bound-sa-token\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.747096 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5e9670cb-efb5-48a4-b621-b9da02d8afef-certs\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.747605 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3114586-1658-4733-b9f6-7a6ebcf25b46-signing-key\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 
10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.748150 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-srv-cert\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.748206 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e9670cb-efb5-48a4-b621-b9da02d8afef-node-bootstrap-token\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.755101 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d458e72-0fea-4998-9f96-b2c4c5427f39-srv-cert\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.764506 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mhb4k\" (UID: \"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.765213 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1312dce6-4901-499e-a380-fcf84c6126c4-proxy-tls\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.765525 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.765694 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-profile-collector-cert\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.765889 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.764648 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0797c211-fc73-476d-9dc2-383a5a9d1dcc-apiservice-cert\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.766257 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-default-certificate\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.766683 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac0bbe2-2764-4862-9b13-9940b8720dcf-serving-cert\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.767193 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ed74970-560a-4f45-84e8-ebedcaf74392-secret-volume\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.767626 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-metrics-certs\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.768175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrkp\" (UniqueName: \"kubernetes.io/projected/bee93ff7-fb56-4cb1-846c-790c91498c6b-kube-api-access-8xrkp\") pod \"authentication-operator-69f744f599-kpx2c\" (UID: \"bee93ff7-fb56-4cb1-846c-790c91498c6b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.768444 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b35e3ab6-301b-4b5d-80f6-ba8e1e301d60-cert\") pod \"ingress-canary-pt8rf\" (UID: \"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60\") " pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.769575 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8667d5ed-56e8-42ad-86ad-63aa962e7c96-proxy-tls\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.771295 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/124095a6-83e3-45da-8bed-6ed5a8f6892b-stats-auth\") pod 
\"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.775311 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb8be535-82d3-4a30-b6aa-45058a58f30e-metrics-tls\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.779746 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0797c211-fc73-476d-9dc2-383a5a9d1dcc-webhook-cert\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.791686 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64s5r\" (UniqueName: \"kubernetes.io/projected/122ebfd5-3b50-40a3-929c-0751226c5253-kube-api-access-64s5r\") pod \"dns-operator-744455d44c-pjv6m\" (UID: \"122ebfd5-3b50-40a3-929c-0751226c5253\") " pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.794636 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tcl\" (UniqueName: \"kubernetes.io/projected/9b3c578d-cefd-4be8-98dc-a646ebc2a3df-kube-api-access-66tcl\") pod \"machine-approver-56656f9798-5sd4x\" (UID: \"9b3c578d-cefd-4be8-98dc-a646ebc2a3df\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.808813 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.813791 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g9vn\" (UniqueName: \"kubernetes.io/projected/f1aaae5a-1812-407f-bc93-79edf6ef6476-kube-api-access-7g9vn\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.834261 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:51 crc kubenswrapper[4752]: E0122 10:26:51.835733 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.335712502 +0000 UTC m=+91.565655410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.836017 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k5jc\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-kube-api-access-9k5jc\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.837023 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.865709 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqlk\" (UniqueName: \"kubernetes.io/projected/1e312683-699a-4ea1-9914-d9dc8b237cb4-kube-api-access-sbqlk\") pod \"cluster-samples-operator-665b6dd947-5t8fw\" (UID: \"1e312683-699a-4ea1-9914-d9dc8b237cb4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.876218 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr2pt\" (UniqueName: \"kubernetes.io/projected/d0c57c83-4f36-4531-8f22-e3e37b49d843-kube-api-access-gr2pt\") pod \"kube-storage-version-migrator-operator-b67b599dd-q88wn\" (UID: \"d0c57c83-4f36-4531-8f22-e3e37b49d843\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.911713 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w848\" (UniqueName: \"kubernetes.io/projected/ca13be73-3b56-4d7d-aaf4-547e7fbcec5f-kube-api-access-5w848\") pod \"etcd-operator-b45778765-pnz94\" (UID: \"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.921041 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1aaae5a-1812-407f-bc93-79edf6ef6476-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qv9jt\" (UID: \"f1aaae5a-1812-407f-bc93-79edf6ef6476\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:51 crc kubenswrapper[4752]: E0122 10:26:51.937354 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.437338935 +0000 UTC m=+91.667281843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.937845 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.945388 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zw5\" (UniqueName: \"kubernetes.io/projected/bd7570aa-f486-408b-b0c6-83e0903fa3e8-kube-api-access-57zw5\") pod \"cluster-image-registry-operator-dc59b4c8b-v6tj4\" (UID: \"bd7570aa-f486-408b-b0c6-83e0903fa3e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.960704 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwd2d\" (UniqueName: \"kubernetes.io/projected/a94e46fc-5978-4d73-b811-463257b90e7c-kube-api-access-wwd2d\") pod \"openshift-controller-manager-operator-756b6f6bc6-2k8mm\" (UID: \"a94e46fc-5978-4d73-b811-463257b90e7c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:51 crc kubenswrapper[4752]: I0122 10:26:51.996472 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwjtq\" (UniqueName: \"kubernetes.io/projected/3d458e72-0fea-4998-9f96-b2c4c5427f39-kube-api-access-fwjtq\") pod \"olm-operator-6b444d44fb-xkgxq\" (UID: \"3d458e72-0fea-4998-9f96-b2c4c5427f39\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.021817 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwn9\" (UniqueName: \"kubernetes.io/projected/73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce-kube-api-access-jrwn9\") pod \"multus-admission-controller-857f4d67dd-mhb4k\" (UID: \"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.038552 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.039265 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.539246536 +0000 UTC m=+91.769189444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.040737 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdmrl\" (UniqueName: \"kubernetes.io/projected/57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8-kube-api-access-tdmrl\") pod \"catalog-operator-68c6474976-r25qn\" (UID: \"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.041823 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.049121 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jh6kp"] Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.052158 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.060660 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.061984 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfq6s\" (UniqueName: \"kubernetes.io/projected/c3114586-1658-4733-b9f6-7a6ebcf25b46-kube-api-access-wfq6s\") pod \"service-ca-9c57cc56f-nmbhc\" (UID: \"c3114586-1658-4733-b9f6-7a6ebcf25b46\") " pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.071177 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5f5qf"] Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.079845 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.084283 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hm5\" (UniqueName: \"kubernetes.io/projected/2ed74970-560a-4f45-84e8-ebedcaf74392-kube-api-access-t2hm5\") pod \"collect-profiles-29484615-wmzmg\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.085377 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.091511 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.098166 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw98d\" (UniqueName: \"kubernetes.io/projected/cb8be535-82d3-4a30-b6aa-45058a58f30e-kube-api-access-mw98d\") pod \"dns-default-tbzz8\" (UID: \"cb8be535-82d3-4a30-b6aa-45058a58f30e\") " pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.098394 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.121741 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rg7t\" (UniqueName: \"kubernetes.io/projected/5e9670cb-efb5-48a4-b621-b9da02d8afef-kube-api-access-7rg7t\") pod \"machine-config-server-wm2bq\" (UID: \"5e9670cb-efb5-48a4-b621-b9da02d8afef\") " pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.128869 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.140139 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.140616 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.640593731 +0000 UTC m=+91.870536639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.141271 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tgkf\" (UniqueName: \"kubernetes.io/projected/1312dce6-4901-499e-a380-fcf84c6126c4-kube-api-access-2tgkf\") pod \"machine-config-controller-84d6567774-2zchx\" (UID: \"1312dce6-4901-499e-a380-fcf84c6126c4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.145139 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" Jan 22 10:26:52 crc kubenswrapper[4752]: W0122 10:26:52.147291 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab70863_0bea_4fb3_9265_045d0e2dff04.slice/crio-d7d4ea5eb5e74cda20514beb457d4ee9cdb121c75603ca2ddcb13a87b4ca1f91 WatchSource:0}: Error finding container d7d4ea5eb5e74cda20514beb457d4ee9cdb121c75603ca2ddcb13a87b4ca1f91: Status 404 returned error can't find the container with id d7d4ea5eb5e74cda20514beb457d4ee9cdb121c75603ca2ddcb13a87b4ca1f91 Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.152523 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55"] Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.156922 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25npf\" (UniqueName: \"kubernetes.io/projected/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-kube-api-access-25npf\") pod \"marketplace-operator-79b997595-gx9kk\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") " pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.173871 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be9bc31d-0a4b-4060-b43f-2f563b9c03b8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c5w5z\" (UID: \"be9bc31d-0a4b-4060-b43f-2f563b9c03b8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.174976 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.192711 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69dhq\" (UniqueName: \"kubernetes.io/projected/f29c0509-af02-4b97-981a-cc0f24848953-kube-api-access-69dhq\") pod \"package-server-manager-789f6589d5-hrggg\" (UID: \"f29c0509-af02-4b97-981a-cc0f24848953\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.194465 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.208754 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.212825 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5l26\" (UniqueName: \"kubernetes.io/projected/48336b51-2e64-4f1e-a96f-5f866900ba2a-kube-api-access-d5l26\") pod \"csi-hostpathplugin-g5js4\" (UID: \"48336b51-2e64-4f1e-a96f-5f866900ba2a\") " pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.221598 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.230754 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.237949 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.245478 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.245958 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.745941331 +0000 UTC m=+91.975884229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.248306 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzpv\" (UniqueName: \"kubernetes.io/projected/8667d5ed-56e8-42ad-86ad-63aa962e7c96-kube-api-access-5wzpv\") pod \"machine-config-operator-74547568cd-ll2c4\" (UID: \"8667d5ed-56e8-42ad-86ad-63aa962e7c96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.252088 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wm2bq" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.265536 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrtp\" (UniqueName: \"kubernetes.io/projected/f6c2b2d2-d728-4416-aebd-a4ce23716f41-kube-api-access-2jrtp\") pod \"migrator-59844c95c7-djs29\" (UID: \"f6c2b2d2-d728-4416-aebd-a4ce23716f41\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.276222 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.305524 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2kb\" (UniqueName: \"kubernetes.io/projected/b35e3ab6-301b-4b5d-80f6-ba8e1e301d60-kube-api-access-6z2kb\") pod \"ingress-canary-pt8rf\" (UID: \"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60\") " pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:52 crc kubenswrapper[4752]: W0122 10:26:52.308420 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3563ce_e872_4f8d_b605_a6962b979d53.slice/crio-bee468a90adec73cc6e2eb40d846e816a9fcc70e596453994501ca4c95e25db7 WatchSource:0}: Error finding container bee468a90adec73cc6e2eb40d846e816a9fcc70e596453994501ca4c95e25db7: Status 404 returned error can't find the container with id bee468a90adec73cc6e2eb40d846e816a9fcc70e596453994501ca4c95e25db7 Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.312829 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.316190 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.321893 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cms42\" (UniqueName: \"kubernetes.io/projected/2ac0bbe2-2764-4862-9b13-9940b8720dcf-kube-api-access-cms42\") pod \"service-ca-operator-777779d784-wgf7v\" (UID: \"2ac0bbe2-2764-4862-9b13-9940b8720dcf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.328539 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlzx\" (UniqueName: \"kubernetes.io/projected/0797c211-fc73-476d-9dc2-383a5a9d1dcc-kube-api-access-mnlzx\") pod \"packageserver-d55dfcdfc-g6sx2\" (UID: \"0797c211-fc73-476d-9dc2-383a5a9d1dcc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.341664 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk55g\" (UniqueName: \"kubernetes.io/projected/124095a6-83e3-45da-8bed-6ed5a8f6892b-kube-api-access-rk55g\") pod \"router-default-5444994796-ldnv8\" (UID: \"124095a6-83e3-45da-8bed-6ed5a8f6892b\") " pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.347015 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.347388 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.847373639 +0000 UTC m=+92.077316547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.447968 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.448352 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:52.948337445 +0000 UTC m=+92.178280353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.452708 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.459774 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.466983 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.480837 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.488110 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.502580 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.549682 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.549994 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.049979648 +0000 UTC m=+92.279922556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.560293 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.571684 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pt8rf" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.589233 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pjv6m"] Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.650396 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.650753 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.150736488 +0000 UTC m=+92.380679396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.689866 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kpx2c"] Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.748236 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" event={"ID":"9b3c578d-cefd-4be8-98dc-a646ebc2a3df","Type":"ContainerStarted","Data":"37b5d354436152248924bdaf48dfffb8626b32c88dc146c8bcb3e986fd54f0af"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.751214 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" event={"ID":"0b3563ce-e872-4f8d-b605-a6962b979d53","Type":"ContainerStarted","Data":"bee468a90adec73cc6e2eb40d846e816a9fcc70e596453994501ca4c95e25db7"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.752294 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.752901 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.252888585 +0000 UTC m=+92.482831493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.774378 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" event={"ID":"622b1b03-6c1d-460c-ac51-10046c682195","Type":"ContainerStarted","Data":"1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.779404 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4"] Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.781686 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.819253 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" event={"ID":"c82bf83f-1d94-41a2-ad18-2264806dd9ff","Type":"ContainerStarted","Data":"20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.820016 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.829921 4752 generic.go:334] "Generic (PLEG): container finished" podID="619941ce-9ead-4926-b8c0-f9108cd58462" containerID="d3e1b50de766dc378a0c2a8b2b84a775fca4e0c8487420dfa509750c906d22e7" exitCode=0 Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.829983 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" event={"ID":"619941ce-9ead-4926-b8c0-f9108cd58462","Type":"ContainerDied","Data":"d3e1b50de766dc378a0c2a8b2b84a775fca4e0c8487420dfa509750c906d22e7"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.831124 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" event={"ID":"eab70863-0bea-4fb3-9265-045d0e2dff04","Type":"ContainerStarted","Data":"d7d4ea5eb5e74cda20514beb457d4ee9cdb121c75603ca2ddcb13a87b4ca1f91"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.847423 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jh6kp" event={"ID":"b82cc492-857e-4eaf-8e18-87e830bdc9f6","Type":"ContainerStarted","Data":"05de6e20d504002b041892e8bf6c3732230d7b77fe46f3def808ed43dda88b69"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.855672 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.856907 4752 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.35689176 +0000 UTC m=+92.586834668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.884179 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" event={"ID":"315cb527-e73b-4e1f-bc55-09c5c694cef9","Type":"ContainerStarted","Data":"41290716765dfb952baab8fbeef9f9f44c9f562adec9c514798a1c4d10a1af75"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.904215 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wm2bq" event={"ID":"5e9670cb-efb5-48a4-b621-b9da02d8afef","Type":"ContainerStarted","Data":"b645161ce5272366cd4183acaf9e1e60b0c1a5b11df77e83f5a2984eb0a81681"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.931786 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bmh84" event={"ID":"5e213e66-9429-41a1-9b53-476794092c7f","Type":"ContainerStarted","Data":"336c10f8bbda3e6f0af5542a67c5f7ee0daba3bb004944f31b689832a7fe4ff5"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.932414 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.933799 4752 patch_prober.go:28] interesting pod/downloads-7954f5f757-bmh84 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.933836 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bmh84" podUID="5e213e66-9429-41a1-9b53-476794092c7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.952984 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" event={"ID":"0d174189-03c1-40c5-9304-44f925f565c7","Type":"ContainerStarted","Data":"213e11e667bdcb0cadf43bdf9ec6c42801db3a31566cae2f046ad488fc0b6804"} Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.960609 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:52 crc kubenswrapper[4752]: E0122 10:26:52.964262 4752 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.464241572 +0000 UTC m=+92.694184540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:52 crc kubenswrapper[4752]: I0122 10:26:52.985201 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.071198 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.073165 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.573143285 +0000 UTC m=+92.803086193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.180938 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.187221 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.687199972 +0000 UTC m=+92.917142880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.284638 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.285013 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.784997924 +0000 UTC m=+93.014940832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.293385 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.386393 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.386753 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.886742351 +0000 UTC m=+93.116685259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.471593 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-g476l" podStartSLOduration=67.471572795 podStartE2EDuration="1m7.471572795s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:53.456023839 +0000 UTC m=+92.685966747" watchObservedRunningTime="2026-01-22 10:26:53.471572795 +0000 UTC m=+92.701515703" Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.487746 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.487938 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.987913172 +0000 UTC m=+93.217856080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.488124 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.488389 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:53.988381274 +0000 UTC m=+93.218324182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.589202 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.589761 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.08974645 +0000 UTC m=+93.319689358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.695067 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.695363 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.195348997 +0000 UTC m=+93.425291905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.712777 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2"
Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.796636 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.797065 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.297048622 +0000 UTC m=+93.526991530 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:53 crc kubenswrapper[4752]: I0122 10:26:53.904253 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:53 crc kubenswrapper[4752]: E0122 10:26:53.905009 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.40499655 +0000 UTC m=+93.634939458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.006721 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.007147 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.507133066 +0000 UTC m=+93.737075974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.022055 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" event={"ID":"315cb527-e73b-4e1f-bc55-09c5c694cef9","Type":"ContainerStarted","Data":"54fadef4ab97f710838cd0db894758cf90742065f556f37db57b77889188d5f0"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.037828 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" podStartSLOduration=68.037810927 podStartE2EDuration="1m8.037810927s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.03755241 +0000 UTC m=+93.267495328" watchObservedRunningTime="2026-01-22 10:26:54.037810927 +0000 UTC m=+93.267753835"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.058715 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wm2bq" event={"ID":"5e9670cb-efb5-48a4-b621-b9da02d8afef","Type":"ContainerStarted","Data":"eb2cad3b625e10ba5ce022aa3bdfaa3c88dc4c40654c9252656584c763579388"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.072654 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.086263 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" event={"ID":"bd7570aa-f486-408b-b0c6-83e0903fa3e8","Type":"ContainerStarted","Data":"1b96c526d9a1a7d4dc5c13d1dd658eacada9140281642edc3bb88348940f9128"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.087197 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" event={"ID":"eab70863-0bea-4fb3-9265-045d0e2dff04","Type":"ContainerStarted","Data":"0803a43cfa4abe933adf41a89273e32c2c6dd76f9a60fa7c6e7cb0662d2eec76"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.088089 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jh6kp" event={"ID":"b82cc492-857e-4eaf-8e18-87e830bdc9f6","Type":"ContainerStarted","Data":"43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.090038 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" event={"ID":"bee93ff7-fb56-4cb1-846c-790c91498c6b","Type":"ContainerStarted","Data":"5ecbcb7267bbf255a8dc971f44aa879fb43b57ecc389fa77f274743708146b0e"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.096076 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lp2rk" podStartSLOduration=68.096058477 podStartE2EDuration="1m8.096058477s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.083215072 +0000 UTC m=+93.313157980" watchObservedRunningTime="2026-01-22 10:26:54.096058477 +0000 UTC m=+93.326001405"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.096368 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.105176 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bmh84" podStartSLOduration=68.105162365 podStartE2EDuration="1m8.105162365s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.10381833 +0000 UTC m=+93.333761238" watchObservedRunningTime="2026-01-22 10:26:54.105162365 +0000 UTC m=+93.335105273"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.107912 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.109201 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.60919085 +0000 UTC m=+93.839133758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.124665 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ldnv8" event={"ID":"124095a6-83e3-45da-8bed-6ed5a8f6892b","Type":"ContainerStarted","Data":"6e4c530234f8ee23e1a31da1fc292270d796e4568d3d4b22d732f8412a8392e9"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.124707 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ldnv8" event={"ID":"124095a6-83e3-45da-8bed-6ed5a8f6892b","Type":"ContainerStarted","Data":"a0a8ac5f3b766e64232c826ca4d0dbf1c0ab8b64a0596dd7941f933606551155"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.126075 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psf87" podStartSLOduration=68.12606553 podStartE2EDuration="1m8.12606553s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.124415457 +0000 UTC m=+93.354358365" watchObservedRunningTime="2026-01-22 10:26:54.12606553 +0000 UTC m=+93.356008438"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.173185 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" event={"ID":"122ebfd5-3b50-40a3-929c-0751226c5253","Type":"ContainerStarted","Data":"1f0177f4891ca3179755a8586bc9e7f850bb8dec94d3e5450adfdc5ca6a4eb4e"}
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.182574 4752 patch_prober.go:28] interesting pod/downloads-7954f5f757-bmh84 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.182638 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bmh84" podUID="5e213e66-9429-41a1-9b53-476794092c7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.187758 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" podStartSLOduration=68.18771692 podStartE2EDuration="1m8.18771692s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.178608562 +0000 UTC m=+93.408551470" watchObservedRunningTime="2026-01-22 10:26:54.18771692 +0000 UTC m=+93.417659828"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.214656 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.216179 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.716162902 +0000 UTC m=+93.946105810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.282453 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jh6kp" podStartSLOduration=68.282436292 podStartE2EDuration="1m8.282436292s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.282183346 +0000 UTC m=+93.512126254" watchObservedRunningTime="2026-01-22 10:26:54.282436292 +0000 UTC m=+93.512379200"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.318102 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.318515 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.818497864 +0000 UTC m=+94.048440772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.383397 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" podStartSLOduration=68.383379878 podStartE2EDuration="1m8.383379878s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.382537246 +0000 UTC m=+93.612480154" watchObservedRunningTime="2026-01-22 10:26:54.383379878 +0000 UTC m=+93.613322786"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.383590 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zq8bp" podStartSLOduration=68.383586013 podStartE2EDuration="1m8.383586013s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.347782568 +0000 UTC m=+93.577725486" watchObservedRunningTime="2026-01-22 10:26:54.383586013 +0000 UTC m=+93.613528921"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.419217 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.419845 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:54.919814249 +0000 UTC m=+94.149757167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.455481 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ldnv8"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.456032 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.456069 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.472135 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" podStartSLOduration=68.472116634 podStartE2EDuration="1m8.472116634s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.471138379 +0000 UTC m=+93.701081277" watchObservedRunningTime="2026-01-22 10:26:54.472116634 +0000 UTC m=+93.702059542"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.474499 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nmbhc"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.508537 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b" podStartSLOduration=68.508515864 podStartE2EDuration="1m8.508515864s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.502312142 +0000 UTC m=+93.732255050" watchObservedRunningTime="2026-01-22 10:26:54.508515864 +0000 UTC m=+93.738458772"
Jan 22 10:26:54 crc kubenswrapper[4752]: W0122 10:26:54.517969 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3114586_1658_4733_b9f6_7a6ebcf25b46.slice/crio-13f6864728fcbee86a834cde0f09e3b83b29d7190955900e7cdd3abc15b42d97 WatchSource:0}: Error finding container 13f6864728fcbee86a834cde0f09e3b83b29d7190955900e7cdd3abc15b42d97: Status 404 returned error can't find the container with id 13f6864728fcbee86a834cde0f09e3b83b29d7190955900e7cdd3abc15b42d97
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.521590 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.521955 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.021943655 +0000 UTC m=+94.251886563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.544278 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" podStartSLOduration=68.544258827 podStartE2EDuration="1m8.544258827s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.541485975 +0000 UTC m=+93.771428883" watchObservedRunningTime="2026-01-22 10:26:54.544258827 +0000 UTC m=+93.774201735"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.548396 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.597282 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wm2bq" podStartSLOduration=5.597263231 podStartE2EDuration="5.597263231s" podCreationTimestamp="2026-01-22 10:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.592322462 +0000 UTC m=+93.822265380" watchObservedRunningTime="2026-01-22 10:26:54.597263231 +0000 UTC m=+93.827206139"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.614879 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.629199 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.629504 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.129488892 +0000 UTC m=+94.359431800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.632132 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pnz94"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.644232 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.644520 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.663603 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ldnv8" podStartSLOduration=68.663585012 podStartE2EDuration="1m8.663585012s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:54.650260404 +0000 UTC m=+93.880203422" watchObservedRunningTime="2026-01-22 10:26:54.663585012 +0000 UTC m=+93.893527920"
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.664832 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mhb4k"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.733386 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.757805 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.257778371 +0000 UTC m=+94.487721279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.828083 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tbzz8"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.843908 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g5js4"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.845190 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.845518 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.345503351 +0000 UTC m=+94.575446259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.861916 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gx9kk"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.876969 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.929582 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.946677 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:54 crc kubenswrapper[4752]: E0122 10:26:54.947054 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.447043072 +0000 UTC m=+94.676985980 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.964580 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.970917 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pt8rf"]
Jan 22 10:26:54 crc kubenswrapper[4752]: I0122 10:26:54.977473 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z"]
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.001066 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v"]
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.011045 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4"]
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.047461 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.047619 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.547591427 +0000 UTC m=+94.777534335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.047769 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.048110 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.54810276 +0000 UTC m=+94.778045668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.148566 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg"]
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.148875 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.148999 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.648972213 +0000 UTC m=+94.878915121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.149176 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.149434 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.649422745 +0000 UTC m=+94.879365653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.222719 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" event={"ID":"a94e46fc-5978-4d73-b811-463257b90e7c","Type":"ContainerStarted","Data":"a888f5742245caa0f7bb104ff33a5c1f4d3ac968425fb72396fc78fb96ca3b59"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.222759 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" event={"ID":"a94e46fc-5978-4d73-b811-463257b90e7c","Type":"ContainerStarted","Data":"c4c7c7f6dc5937031c86d2d85cfe51943d839dc205eec0d78143709a3231fded"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.237435 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" event={"ID":"2ac0bbe2-2764-4862-9b13-9940b8720dcf","Type":"ContainerStarted","Data":"bb360d42220fa1ebebca33110c9213bb3119e70ef0d8cd095e1f0d639bc49474"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.244357 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" event={"ID":"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f","Type":"ContainerStarted","Data":"192ebe92dbd6f9ff204801268206eb316f71aa9cc03b139f9dcd113c25199748"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.250590 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.251139 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.7511122 +0000 UTC m=+94.981055108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.268122 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2k8mm" podStartSLOduration=69.268098743 podStartE2EDuration="1m9.268098743s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.267915998 +0000 UTC m=+94.497858906" watchObservedRunningTime="2026-01-22 10:26:55.268098743 +0000 UTC m=+94.498041651"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.273222 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" event={"ID":"9b3c578d-cefd-4be8-98dc-a646ebc2a3df","Type":"ContainerStarted","Data":"c1cd3e47484e1ae2ebd8666c9026e5a0e80decb1f35aea3a22bb98bf73308fee"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.273303 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" event={"ID":"9b3c578d-cefd-4be8-98dc-a646ebc2a3df","Type":"ContainerStarted","Data":"7b2c38eeab190f8b64d979ac8af0dae360b0cb5fd456631ddd862b12f408833f"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.299321 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" event={"ID":"122ebfd5-3b50-40a3-929c-0751226c5253","Type":"ContainerStarted","Data":"db714d513942f8f707d168722a94d92afc1accfc0e256e899cd9fba8209e41ad"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.299376 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" event={"ID":"122ebfd5-3b50-40a3-929c-0751226c5253","Type":"ContainerStarted","Data":"979de5ae4252bf341bc7d6a887ee0dfeeba2e5c4ff2cb539bd88d70eaa5eba88"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.313741 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29"]
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.327582 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" event={"ID":"3d458e72-0fea-4998-9f96-b2c4c5427f39","Type":"ContainerStarted","Data":"27b430a6a257f38e90f55217dd94f8e9421e4ee949556553ecee7f6a4c1405d5"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.340012 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" event={"ID":"1e312683-699a-4ea1-9914-d9dc8b237cb4","Type":"ContainerStarted","Data":"e8da53027d3a9392aba6d4d474751819c569e383ff057ceab620d3978f124b29"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.365099 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5sd4x" podStartSLOduration=69.365070494 podStartE2EDuration="1m9.365070494s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.34881894 +0000 UTC m=+94.578761848" watchObservedRunningTime="2026-01-22 10:26:55.365070494 +0000 UTC m=+94.595013402"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.375565 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" event={"ID":"bee93ff7-fb56-4cb1-846c-790c91498c6b","Type":"ContainerStarted","Data":"3719a4b6b6c9c8a0dc2ae3e5b8eba2a85c3ca2bb79bd9eba6a54589263b55ef1"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.375991 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.385259 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pjv6m" podStartSLOduration=69.385233991 podStartE2EDuration="1m9.385233991s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.381395911 +0000 UTC m=+94.611338819" watchObservedRunningTime="2026-01-22 10:26:55.385233991 +0000 UTC m=+94.615176899"
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.395839 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.895818967 +0000 UTC m=+95.125761875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.432331 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tbzz8" event={"ID":"cb8be535-82d3-4a30-b6aa-45058a58f30e","Type":"ContainerStarted","Data":"6f0bdac70bff29a4866f380843716dd0d985dece21c20cc6422ea6c77293e397"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.455597 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pt8rf" event={"ID":"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60","Type":"ContainerStarted","Data":"ba9389d3fbdd06205333cff5f888f405f96a26ff48ac7a343d8aa27d78301794"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.489298 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" event={"ID":"48336b51-2e64-4f1e-a96f-5f866900ba2a","Type":"ContainerStarted","Data":"81f6cd6e8ebaa1fc9968ff9aeda70c15fd1d8eaa5feee6c6217dcb3c57302aa1"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.494131 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:26:55 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]process-running ok
Jan 22 10:26:55 crc kubenswrapper[4752]: healthz check failed
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.494183 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.495239 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.496198 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:55.996184567 +0000 UTC m=+95.226127475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.552786 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" event={"ID":"f1aaae5a-1812-407f-bc93-79edf6ef6476","Type":"ContainerStarted","Data":"f27616fdd0d137c378c4ad954f0e5387a8321f41dd925f06eb746273130342e9"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.552832 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" event={"ID":"f1aaae5a-1812-407f-bc93-79edf6ef6476","Type":"ContainerStarted","Data":"dbdfa969ffb4f9a3464602d80eb6254fc169838c146b935b4c5a84258214175a"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.567437 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" event={"ID":"0797c211-fc73-476d-9dc2-383a5a9d1dcc","Type":"ContainerStarted","Data":"7510bd2d10f9a07fee2bbbf41e4de229d5b8d6fb6fc2050bb613adc110b00198"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.596524 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.596843 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.096830804 +0000 UTC m=+95.326773712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.614733 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" event={"ID":"1312dce6-4901-499e-a380-fcf84c6126c4","Type":"ContainerStarted","Data":"615333ee05bccd0f3ce57d54359217b5f1bd0e64a1131655ddc6126159fb99a7"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.643180 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" event={"ID":"619941ce-9ead-4926-b8c0-f9108cd58462","Type":"ContainerStarted","Data":"b3a7f70c9d1a2f734c82191e37556f5d0b8c6c25c78d93d7b818bbdad2771e16"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.646523 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" event={"ID":"be9bc31d-0a4b-4060-b43f-2f563b9c03b8","Type":"ContainerStarted","Data":"4c15bbed5fe12b511b70bb47c3ab5a535fe9957f7e7b4f7d9d747e332133f40d"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.647437 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" event={"ID":"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8","Type":"ContainerStarted","Data":"7e31781ee9b3360158f30d2e05c6916f9e5c662732485424b2c400cd714a649d"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.647461 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" event={"ID":"57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8","Type":"ContainerStarted","Data":"c0c21328a0ce9e2f1c677c8d2ca40a27a8a7d9046d12efd6e12668e8880c7b48"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.648065 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.648847 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" event={"ID":"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98","Type":"ContainerStarted","Data":"f406d88a768fb1ced4958a4463387aa1484bdc5bad568bef53f733ec8502eed2"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.651410 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" event={"ID":"2ed74970-560a-4f45-84e8-ebedcaf74392","Type":"ContainerStarted","Data":"86cff707c958ecdc71daacf4539412b109bce063d26f757cd8a1af90b6372b56"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.654911 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" event={"ID":"d0c57c83-4f36-4531-8f22-e3e37b49d843","Type":"ContainerStarted","Data":"ff7e36ba9fbc4109ce1a136c386902bf58207337554325620e303be1eaf6f285"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.656736 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" event={"ID":"eab70863-0bea-4fb3-9265-045d0e2dff04","Type":"ContainerStarted","Data":"a2bf2a6f102165f7fda1e9a28082d5d811bb8437f8b5d726d39f59a6e7eda934"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.658123 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" event={"ID":"8667d5ed-56e8-42ad-86ad-63aa962e7c96","Type":"ContainerStarted","Data":"02043cb77870c876bbeab9b40254e034d836478bdbb3dfceefae78a1741b48d8"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.659468 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" event={"ID":"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce","Type":"ContainerStarted","Data":"b4a61f052b500893e46db6932bd7e27cad607c3979501bdc6741f6057c7987c7"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.661394 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" event={"ID":"c3114586-1658-4733-b9f6-7a6ebcf25b46","Type":"ContainerStarted","Data":"38901799c301f8184e7ff30f3f19816ccf256840e5ef34cd9455d7cae0ae8f8e"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.661426 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" event={"ID":"c3114586-1658-4733-b9f6-7a6ebcf25b46","Type":"ContainerStarted","Data":"13f6864728fcbee86a834cde0f09e3b83b29d7190955900e7cdd3abc15b42d97"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.671890 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rcr55" event={"ID":"0b3563ce-e872-4f8d-b605-a6962b979d53","Type":"ContainerStarted","Data":"07216c832cae178e1935cc2c4086c810105eba5e6412b0cd02421d9f76ffe15c"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.671995 4752 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r25qn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.672133 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" podUID="57c9b12f-5d5e-4c8c-95fd-b9f592d9c4f8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.674780 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpx2c" podStartSLOduration=69.674765269 podStartE2EDuration="1m9.674765269s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.423992722 +0000 UTC m=+94.653935630" watchObservedRunningTime="2026-01-22 10:26:55.674765269 +0000 UTC m=+94.904708177"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.701325 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.701517 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.201501897 +0000 UTC m=+95.431444805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.701639 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.701962 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.201954339 +0000 UTC m=+95.431897247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.704918 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" podStartSLOduration=69.704906756 podStartE2EDuration="1m9.704906756s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.703866689 +0000 UTC m=+94.933809597" watchObservedRunningTime="2026-01-22 10:26:55.704906756 +0000 UTC m=+94.934849664"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.705508 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" podStartSLOduration=69.705501921 podStartE2EDuration="1m9.705501921s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.67556559 +0000 UTC m=+94.905508498" watchObservedRunningTime="2026-01-22 10:26:55.705501921 +0000 UTC m=+94.935444819"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.709955 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2ct5b"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.711549 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" event={"ID":"bd7570aa-f486-408b-b0c6-83e0903fa3e8","Type":"ContainerStarted","Data":"1fcd93c039f028b7754d1ee3f0910f6ac80f0e41a76ba2d48656c96f3b30dda3"}
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.779883 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5f5qf" podStartSLOduration=69.779844892 podStartE2EDuration="1m9.779844892s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.748511444 +0000 UTC m=+94.978454352" watchObservedRunningTime="2026-01-22 10:26:55.779844892 +0000 UTC m=+95.009787800"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.782181 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" podStartSLOduration=69.782173433 podStartE2EDuration="1m9.782173433s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.779264377 +0000 UTC m=+95.009207295" watchObservedRunningTime="2026-01-22 10:26:55.782173433 +0000 UTC m=+95.012116341"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.787398 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.787723 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.806304 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.809097 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.309081955 +0000 UTC m=+95.539024863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.834762 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nmbhc" podStartSLOduration=69.834746915 podStartE2EDuration="1m9.834746915s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.831451249 +0000 UTC m=+95.061394157" watchObservedRunningTime="2026-01-22 10:26:55.834746915 +0000 UTC m=+95.064689823"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.870982 4752 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m8c4r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]log ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]etcd ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/max-in-flight-filter ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 22 10:26:55 crc kubenswrapper[4752]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/openshift.io-startinformers ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 22 10:26:55 crc kubenswrapper[4752]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 22 10:26:55 crc kubenswrapper[4752]: livez check failed
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.871056 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" podUID="315cb527-e73b-4e1f-bc55-09c5c694cef9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:26:55 crc kubenswrapper[4752]: E0122 10:26:55.915362 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.415346569 +0000 UTC m=+95.645289477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.918338 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:55 crc kubenswrapper[4752]: I0122 10:26:55.942483 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v6tj4" podStartSLOduration=69.942464747 podStartE2EDuration="1m9.942464747s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:55.881305101 +0000 UTC m=+95.111248019" watchObservedRunningTime="2026-01-22 10:26:55.942464747 +0000 UTC m=+95.172407655"
Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.019038 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.019418 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.519403465 +0000 UTC m=+95.749346373 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.053060 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.053107 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.069750 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.128829 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.129292 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.629275884 +0000 UTC m=+95.859218802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.231162 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.231569 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.731553084 +0000 UTC m=+95.961495992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.332458 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.333208 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.833194977 +0000 UTC m=+96.063137885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.434450 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.434780 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:56.934762428 +0000 UTC m=+96.164705336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.457670 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:26:56 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld Jan 22 10:26:56 crc kubenswrapper[4752]: [+]process-running ok Jan 22 10:26:56 crc kubenswrapper[4752]: healthz check failed Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.457755 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.535727 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.536052 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.036034752 +0000 UTC m=+96.265977660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.637624 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.638106 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.138087566 +0000 UTC m=+96.368030474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.738693 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.739085 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.239072962 +0000 UTC m=+96.469015870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.746246 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" event={"ID":"8667d5ed-56e8-42ad-86ad-63aa962e7c96","Type":"ContainerStarted","Data":"7ab2f43bd7104bda5be552a29875db2ba835277f9f7aefa1e42bf9ed6bfa3285"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.746294 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" event={"ID":"8667d5ed-56e8-42ad-86ad-63aa962e7c96","Type":"ContainerStarted","Data":"9dc7e32dcf5dcff4b6358e94b4957bf3f34ed3588e8b810dcb68136703dab027"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.765900 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" event={"ID":"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98","Type":"ContainerStarted","Data":"7faf98919cf1ce94830ec9853f51cd485e85d8422b5cdca0b02a264d72e5af59"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.768154 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.769967 4752 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gx9kk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.770007 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.785935 4752 csr.go:261] certificate signing request csr-p7w2h is approved, waiting to be issued Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.799097 4752 csr.go:257] certificate signing request csr-p7w2h is issued Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.812344 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" event={"ID":"f29c0509-af02-4b97-981a-cc0f24848953","Type":"ContainerStarted","Data":"8ee30a5828aa3f316b08192f319a2ddfff482d37404d6f73eabcfb00c7973cc9"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.812383 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" event={"ID":"f29c0509-af02-4b97-981a-cc0f24848953","Type":"ContainerStarted","Data":"3be0233d4127c5c662c3ddfaa11e7dd862bde745cd8c1a6e7123b75240423720"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.812949 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.837837 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ll2c4" podStartSLOduration=70.83781869 podStartE2EDuration="1m10.83781869s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:56.807139659 +0000 UTC m=+96.037082567" watchObservedRunningTime="2026-01-22 10:26:56.83781869 +0000 UTC m=+96.067761598" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.839294 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.840297 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.340281174 +0000 UTC m=+96.570224082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.866185 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" podStartSLOduration=70.866164139 podStartE2EDuration="1m10.866164139s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:56.838399945 +0000 UTC m=+96.068342853" watchObservedRunningTime="2026-01-22 10:26:56.866164139 +0000 UTC m=+96.096107047" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.866450 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" event={"ID":"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce","Type":"ContainerStarted","Data":"49243a9a01a6c34059a8f1e5e413ef6c63394ea3774901b1186a17102203aa8d"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.866489 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" event={"ID":"73d4e76d-7187-4ef1-a6c7-f5d8fcbf6dce","Type":"ContainerStarted","Data":"15f1b89421a4fb52102ef9d17fe61c834098d226dbe7cc63daddcc7b0dc8b229"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.893814 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" podStartSLOduration=70.89379756 podStartE2EDuration="1m10.89379756s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:56.866609601 +0000 UTC m=+96.096552509" watchObservedRunningTime="2026-01-22 10:26:56.89379756 +0000 UTC m=+96.123740468" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.894973 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mhb4k" podStartSLOduration=70.894965971 podStartE2EDuration="1m10.894965971s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:56.893172214 +0000 UTC m=+96.123115122" watchObservedRunningTime="2026-01-22 10:26:56.894965971 +0000 UTC m=+96.124908869" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.896148 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" event={"ID":"2ac0bbe2-2764-4862-9b13-9940b8720dcf","Type":"ContainerStarted","Data":"b590031d15cdc23472ea79e60e1364368ce5a88f49c207e5b83891b20ad6969f"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.922180 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" 
event={"ID":"0797c211-fc73-476d-9dc2-383a5a9d1dcc","Type":"ContainerStarted","Data":"2138ed134fc8f29039a02dcdac9c8bac8bc3b5846adba1fe5d3c0ea14bb9b856"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.924213 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.942366 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.944047 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wgf7v" podStartSLOduration=70.944019001 podStartE2EDuration="1m10.944019001s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:56.941573427 +0000 UTC m=+96.171516335" watchObservedRunningTime="2026-01-22 10:26:56.944019001 +0000 UTC m=+96.173961909" Jan 22 10:26:56 crc kubenswrapper[4752]: E0122 10:26:56.944148 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.444134604 +0000 UTC m=+96.674077512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.957201 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" event={"ID":"be9bc31d-0a4b-4060-b43f-2f563b9c03b8","Type":"ContainerStarted","Data":"2970318e266e8e96a09d491d0d48fe1596305e61b506a4eec7c0ff89cb12c2ce"} Jan 22 10:26:56 crc kubenswrapper[4752]: I0122 10:26:56.989033 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" event={"ID":"f1aaae5a-1812-407f-bc93-79edf6ef6476","Type":"ContainerStarted","Data":"38b01f95b84424bf253f77fd3b8a62816d328ebb037c4a043e31948d398a442c"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.041368 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" podStartSLOduration=71.041349712 podStartE2EDuration="1m11.041349712s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.009443719 +0000 UTC m=+96.239386627" watchObservedRunningTime="2026-01-22 10:26:57.041349712 +0000 UTC m=+96.271292620" Jan 22 10:26:57 crc 
kubenswrapper[4752]: I0122 10:26:57.042193 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c5w5z" podStartSLOduration=71.042187984 podStartE2EDuration="1m11.042187984s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.040038888 +0000 UTC m=+96.269981796" watchObservedRunningTime="2026-01-22 10:26:57.042187984 +0000 UTC m=+96.272130892" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.043766 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.044032 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.543889268 +0000 UTC m=+96.773832176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.044237 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.046351 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.546338712 +0000 UTC m=+96.776281620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.081284 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" event={"ID":"1e312683-699a-4ea1-9914-d9dc8b237cb4","Type":"ContainerStarted","Data":"edb1d4dbad78d5133c720721dcfc6b05f9ade778cbffda85b5d8adecc32053a8"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.081622 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" event={"ID":"1e312683-699a-4ea1-9914-d9dc8b237cb4","Type":"ContainerStarted","Data":"6c6a37f5cc92119c8596dd38bdedaa97124f51ed139456a570c4ae1c1de8f9b3"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.084614 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tbzz8" event={"ID":"cb8be535-82d3-4a30-b6aa-45058a58f30e","Type":"ContainerStarted","Data":"8da860e7506b08885141b7ab6ba94fdd135f21e5b1a286c9071ecd25aa025ed4"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.085148 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tbzz8" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.086091 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pt8rf" event={"ID":"b35e3ab6-301b-4b5d-80f6-ba8e1e301d60","Type":"ContainerStarted","Data":"2fcfb8d9771728713a0d13bb35a27b3b6cf21d25be9f21a4319f66c0d2a420c2"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.087221 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" event={"ID":"2ed74970-560a-4f45-84e8-ebedcaf74392","Type":"ContainerStarted","Data":"143b79a97f47ed7bf1634ee1af19726cf8d6eb4d7a54e090d20294d534a7338b"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.110280 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qv9jt" podStartSLOduration=71.110264821 podStartE2EDuration="1m11.110264821s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.093233187 +0000 UTC m=+96.323176095" watchObservedRunningTime="2026-01-22 10:26:57.110264821 +0000 UTC m=+96.340207729" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.124502 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" event={"ID":"3d458e72-0fea-4998-9f96-b2c4c5427f39","Type":"ContainerStarted","Data":"0476370af284133a9b329a8aad36bef24ed147aacd89729f29baff0aa24299d5"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.124537 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q88wn" 
event={"ID":"d0c57c83-4f36-4531-8f22-e3e37b49d843","Type":"ContainerStarted","Data":"03f00a2dd657d3205c034ecc995df90af3ebd82d13a58144353101c07f2705cc"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.124551 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.124562 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" event={"ID":"ca13be73-3b56-4d7d-aaf4-547e7fbcec5f","Type":"ContainerStarted","Data":"994245f185af1c8523968d7a67e13da21f810cabb2a79f9df4c48d242536e73c"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.145050 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.146302 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.646287881 +0000 UTC m=+96.876230789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.147119 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.158924 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pt8rf" podStartSLOduration=8.158907841 podStartE2EDuration="8.158907841s" podCreationTimestamp="2026-01-22 10:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.157103344 +0000 UTC m=+96.387046262" watchObservedRunningTime="2026-01-22 10:26:57.158907841 +0000 UTC m=+96.388850749" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.160136 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" event={"ID":"1312dce6-4901-499e-a380-fcf84c6126c4","Type":"ContainerStarted","Data":"2f3c5f5208d1a42c00cf66d1329d68605e560a96949e591eb0ee69b8228ee9fe"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.160185 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" event={"ID":"1312dce6-4901-499e-a380-fcf84c6126c4","Type":"ContainerStarted","Data":"78c1835c1830fa34ddeeb8248c54e6b5b5e3173c448b7c63dddbf0920a262c9f"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.188596 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" event={"ID":"f6c2b2d2-d728-4416-aebd-a4ce23716f41","Type":"ContainerStarted","Data":"9c2343832410978187a3be92d204e2fb23056728449cbab2fc30b5dda054ddd4"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.188633 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" event={"ID":"f6c2b2d2-d728-4416-aebd-a4ce23716f41","Type":"ContainerStarted","Data":"9128fff15975319fac062d2f2afc8d1d2c4ee02b765dd5ecfc1566ea4f2917f0"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.188645 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" event={"ID":"f6c2b2d2-d728-4416-aebd-a4ce23716f41","Type":"ContainerStarted","Data":"40732a467e908d93f9b75d4f47ff4284fb81b7c221a7f323c770deb6f7fe8d65"} Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.203240 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6ghbf" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.209010 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tbzz8" podStartSLOduration=8.208992378 podStartE2EDuration="8.208992378s" podCreationTimestamp="2026-01-22 10:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.20635691 +0000 UTC m=+96.436299818" watchObservedRunningTime="2026-01-22 10:26:57.208992378 +0000 UTC m=+96.438935286" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.239965 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" podStartSLOduration=71.239935196 podStartE2EDuration="1m11.239935196s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.238337624 +0000 UTC m=+96.468280532" watchObservedRunningTime="2026-01-22 10:26:57.239935196 +0000 UTC m=+96.469878104" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.246427 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.256614 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.756597531 +0000 UTC m=+96.986540439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.287376 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" podStartSLOduration=71.287353124 podStartE2EDuration="1m11.287353124s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.286965734 +0000 UTC m=+96.516908642" watchObservedRunningTime="2026-01-22 10:26:57.287353124 +0000 UTC m=+96.517296032" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.299465 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r25qn" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.322734 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xkgxq" podStartSLOduration=71.322716627 podStartE2EDuration="1m11.322716627s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.319885763 +0000 UTC m=+96.549828681" watchObservedRunningTime="2026-01-22 10:26:57.322716627 +0000 UTC m=+96.552659535" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.351832 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.353395 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.853379628 +0000 UTC m=+97.083322536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.365359 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pnz94" podStartSLOduration=71.36533749 podStartE2EDuration="1m11.36533749s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.364018135 +0000 UTC m=+96.593961053" watchObservedRunningTime="2026-01-22 10:26:57.36533749 +0000 UTC m=+96.595280398" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.459690 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.460054 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:57.960041742 +0000 UTC m=+97.189984650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.460076 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:26:57 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld Jan 22 10:26:57 crc kubenswrapper[4752]: [+]process-running ok Jan 22 10:26:57 crc kubenswrapper[4752]: healthz check failed Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.460135 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.530674 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-djs29" podStartSLOduration=71.530654455 podStartE2EDuration="1m11.530654455s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.489299716 +0000 UTC m=+96.719242654" watchObservedRunningTime="2026-01-22 10:26:57.530654455 +0000 UTC m=+96.760597363" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.531204 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2zchx" podStartSLOduration=71.531198189 podStartE2EDuration="1m11.531198189s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:26:57.51514676 +0000 UTC m=+96.745089668" watchObservedRunningTime="2026-01-22 10:26:57.531198189 +0000 UTC m=+96.761141097" Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.560696 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.561043 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.061017978 +0000 UTC m=+97.290960886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.561187 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.561472 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.061460189 +0000 UTC m=+97.291403097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.662557 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.662942 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.162926728 +0000 UTC m=+97.392869626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.764089 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.764474 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.264455289 +0000 UTC m=+97.494398217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.800163 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 10:21:56 +0000 UTC, rotation deadline is 2026-11-26 15:15:18.075445516 +0000 UTC
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.800210 4752 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7396h48m20.275237824s for next certificate rotation
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.865362 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.865540 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.365510227 +0000 UTC m=+97.595453135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.865689 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.866059 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.366046761 +0000 UTC m=+97.595989669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.923353 4752 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g6sx2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.923423 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2" podUID="0797c211-fc73-476d-9dc2-383a5a9d1dcc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.966599 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.966749 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.466722259 +0000 UTC m=+97.696665187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:57 crc kubenswrapper[4752]: I0122 10:26:57.966828 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:57 crc kubenswrapper[4752]: E0122 10:26:57.967171 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.46715407 +0000 UTC m=+97.697097018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.068365 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z4qst"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.068616 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.068787 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.568756332 +0000 UTC m=+97.798699240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.069084 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.069470 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.569460501 +0000 UTC m=+97.799403409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.070199 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.072333 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.080411 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4qst"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.170206 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.170425 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.670390225 +0000 UTC m=+97.900333143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.170502 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.170542 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct8vw\" (UniqueName: \"kubernetes.io/projected/d7873b3a-ffed-4c96-818c-90117b142098-kube-api-access-ct8vw\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.170578 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-utilities\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.170595 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-catalog-content\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.170972 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.67095419 +0000 UTC m=+97.900897098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.194255 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" event={"ID":"48336b51-2e64-4f1e-a96f-5f866900ba2a","Type":"ContainerStarted","Data":"21ac771938d78ced6a8d72cd5cd796f13dc5853505e1143a5917b260d702ae39"}
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.196181 4752 generic.go:334] "Generic (PLEG): container finished" podID="2ed74970-560a-4f45-84e8-ebedcaf74392" containerID="143b79a97f47ed7bf1634ee1af19726cf8d6eb4d7a54e090d20294d534a7338b" exitCode=0
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.196226 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" event={"ID":"2ed74970-560a-4f45-84e8-ebedcaf74392","Type":"ContainerDied","Data":"143b79a97f47ed7bf1634ee1af19726cf8d6eb4d7a54e090d20294d534a7338b"}
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.199005 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" event={"ID":"f29c0509-af02-4b97-981a-cc0f24848953","Type":"ContainerStarted","Data":"b7b0a6323204c11b31640e1bbd31b8d6f267958c080ed395399fcb7ca8496285"}
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.201494 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tbzz8" event={"ID":"cb8be535-82d3-4a30-b6aa-45058a58f30e","Type":"ContainerStarted","Data":"1cf737df86fe54020567750cd0be87c41927846eac10e95753a9643bd1651923"}
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.202070 4752 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gx9kk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.202108 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.207362 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g6sx2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.272121 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.272796 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct8vw\" (UniqueName: \"kubernetes.io/projected/d7873b3a-ffed-4c96-818c-90117b142098-kube-api-access-ct8vw\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.272961 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-utilities\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.272978 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-catalog-content\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.273926 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-catalog-content\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.274371 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.774341419 +0000 UTC m=+98.004284487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.274937 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-utilities\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.300002 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct8vw\" (UniqueName: \"kubernetes.io/projected/d7873b3a-ffed-4c96-818c-90117b142098-kube-api-access-ct8vw\") pod \"community-operators-z4qst\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.302369 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x65n2"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.308307 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.312230 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.348986 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x65n2"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.379300 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.379346 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-utilities\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.379380 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-catalog-content\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.379424 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8mp8\" (UniqueName: \"kubernetes.io/projected/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-kube-api-access-g8mp8\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.379806 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.879794032 +0000 UTC m=+98.109736940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.415295 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4qst"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.469687 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:26:58 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld
Jan 22 10:26:58 crc kubenswrapper[4752]: [+]process-running ok
Jan 22 10:26:58 crc kubenswrapper[4752]: healthz check failed
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.469773 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.483283 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.483596 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-utilities\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.483641 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-catalog-content\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.483688 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8mp8\" (UniqueName: \"kubernetes.io/projected/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-kube-api-access-g8mp8\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.484231 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-utilities\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.484415 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:58.984391782 +0000 UTC m=+98.214334690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.484506 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-catalog-content\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.547713 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4r8gj"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.548666 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.576986 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8mp8\" (UniqueName: \"kubernetes.io/projected/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-kube-api-access-g8mp8\") pod \"certified-operators-x65n2\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") " pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.585893 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.586360 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.086341524 +0000 UTC m=+98.316284432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.612518 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r8gj"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.660430 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x65n2"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.688525 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.689042 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-catalog-content\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.689193 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-utilities\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.689374 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttwz\" (UniqueName: \"kubernetes.io/projected/0d035039-51b4-41aa-9e33-db0a1ca24332-kube-api-access-kttwz\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.689567 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.189547498 +0000 UTC m=+98.419490406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.710886 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x72kv"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.712022 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.715146 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x72kv"]
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.800831 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.802386 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.302368073 +0000 UTC m=+98.532310981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.803229 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kttwz\" (UniqueName: \"kubernetes.io/projected/0d035039-51b4-41aa-9e33-db0a1ca24332-kube-api-access-kttwz\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.803934 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-catalog-content\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.803993 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-utilities\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.804037 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-utilities\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.804055 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xks7\" (UniqueName: \"kubernetes.io/projected/71357b8c-126d-4119-943e-653febd0612d-kube-api-access-7xks7\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.804120 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-catalog-content\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.804892 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-catalog-content\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.812619 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-utilities\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.837796 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kttwz\" (UniqueName: \"kubernetes.io/projected/0d035039-51b4-41aa-9e33-db0a1ca24332-kube-api-access-kttwz\") pod \"community-operators-4r8gj\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") " pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.887182 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.905364 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.905560 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-catalog-content\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.905641 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-utilities\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.905676 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xks7\" (UniqueName: \"kubernetes.io/projected/71357b8c-126d-4119-943e-653febd0612d-kube-api-access-7xks7\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: E0122 10:26:58.906069 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.40605244 +0000 UTC m=+98.635995348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.906401 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-catalog-content\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.906608 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-utilities\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:58 crc kubenswrapper[4752]: I0122 10:26:58.950062 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xks7\" (UniqueName: \"kubernetes.io/projected/71357b8c-126d-4119-943e-653febd0612d-kube-api-access-7xks7\") pod \"certified-operators-x72kv\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") " pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.007829 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.008662 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.508641878 +0000 UTC m=+98.738584786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.037156 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x72kv"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.109298 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.109788 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.609751417 +0000 UTC m=+98.839694325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.178090 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x65n2"]
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.215660 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.216141 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.716127834 +0000 UTC m=+98.946070742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.238832 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4qst"]
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.245143 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" event={"ID":"48336b51-2e64-4f1e-a96f-5f866900ba2a","Type":"ContainerStarted","Data":"a065d89cb476643e7fe87557120eb158eb8051a1d792eecc227128cdfeca4755"}
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.254910 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.278616 4752 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.316619 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.317035 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.817019138 +0000 UTC m=+99.046962046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.333195 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r8gj"]
Jan 22 10:26:59 crc kubenswrapper[4752]: W0122 10:26:59.395870 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d035039_51b4_41aa_9e33_db0a1ca24332.slice/crio-7fc145b0f3a6b9e1ea9ca0e6f4b916ea75905dc7316279fb15663dbfb76b5162 WatchSource:0}: Error finding container 7fc145b0f3a6b9e1ea9ca0e6f4b916ea75905dc7316279fb15663dbfb76b5162: Status 404 returned error can't find the container with id 7fc145b0f3a6b9e1ea9ca0e6f4b916ea75905dc7316279fb15663dbfb76b5162
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.419181 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.422128 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:26:59.922104921 +0000 UTC m=+99.152047829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.459491 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 10:26:59 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld
Jan 22 10:26:59 crc kubenswrapper[4752]: [+]process-running ok
Jan 22 10:26:59 crc kubenswrapper[4752]: healthz check failed
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.459533 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.505065 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x72kv"]
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.525483 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.526681 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.02665399 +0000 UTC m=+99.256596898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.628104 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.628615 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.128589651 +0000 UTC m=+99.358532559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: W0122 10:26:59.634765 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71357b8c_126d_4119_943e_653febd0612d.slice/crio-51ceae9984997bc02fe267c283d7e785a03733b91c1976afd2af77472d8af0ea WatchSource:0}: Error finding container 51ceae9984997bc02fe267c283d7e785a03733b91c1976afd2af77472d8af0ea: Status 404 returned error can't find the container with id 51ceae9984997bc02fe267c283d7e785a03733b91c1976afd2af77472d8af0ea
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.653929 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.729488 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.729677 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.229638729 +0000 UTC m=+99.459581647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.729800 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.730233 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.230222304 +0000 UTC m=+99.460165212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.831234 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.831440 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.331406356 +0000 UTC m=+99.561349264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.831926 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed74970-560a-4f45-84e8-ebedcaf74392-config-volume\") pod \"2ed74970-560a-4f45-84e8-ebedcaf74392\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.832007 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ed74970-560a-4f45-84e8-ebedcaf74392-secret-volume\") pod \"2ed74970-560a-4f45-84e8-ebedcaf74392\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.832080 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2hm5\" (UniqueName: \"kubernetes.io/projected/2ed74970-560a-4f45-84e8-ebedcaf74392-kube-api-access-t2hm5\") pod \"2ed74970-560a-4f45-84e8-ebedcaf74392\" (UID: \"2ed74970-560a-4f45-84e8-ebedcaf74392\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.832615 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed74970-560a-4f45-84e8-ebedcaf74392-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ed74970-560a-4f45-84e8-ebedcaf74392" (UID: "2ed74970-560a-4f45-84e8-ebedcaf74392"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.833594 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.833814 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed74970-560a-4f45-84e8-ebedcaf74392-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.834159 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.334137917 +0000 UTC m=+99.564080825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.838105 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed74970-560a-4f45-84e8-ebedcaf74392-kube-api-access-t2hm5" (OuterVolumeSpecName: "kube-api-access-t2hm5") pod "2ed74970-560a-4f45-84e8-ebedcaf74392" (UID: "2ed74970-560a-4f45-84e8-ebedcaf74392"). InnerVolumeSpecName "kube-api-access-t2hm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.838305 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed74970-560a-4f45-84e8-ebedcaf74392-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2ed74970-560a-4f45-84e8-ebedcaf74392" (UID: "2ed74970-560a-4f45-84e8-ebedcaf74392"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.934545 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.934832 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed74970-560a-4f45-84e8-ebedcaf74392" containerName="collect-profiles"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.934874 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed74970-560a-4f45-84e8-ebedcaf74392" containerName="collect-profiles"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.935040 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.935055 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed74970-560a-4f45-84e8-ebedcaf74392" containerName="collect-profiles"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.935290 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ed74970-560a-4f45-84e8-ebedcaf74392-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.935303 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2hm5\" (UniqueName: \"kubernetes.io/projected/2ed74970-560a-4f45-84e8-ebedcaf74392-kube-api-access-t2hm5\") on node \"crc\" DevicePath \"\""
Jan 22 10:26:59 crc kubenswrapper[4752]: E0122 10:26:59.935364 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.435349319 +0000 UTC m=+99.665292227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.935612 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.943681 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.943901 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 22 10:26:59 crc kubenswrapper[4752]: I0122 10:26:59.944579 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.036211 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.036352 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.036406 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:27:00 crc kubenswrapper[4752]: E0122 10:27:00.036794 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.536779837 +0000 UTC m=+99.766722825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.062471 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4jbd"]
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.063953 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4jbd"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.065942 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.073064 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4jbd"]
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.137447 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 10:27:00 crc kubenswrapper[4752]: E0122 10:27:00.137678 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.63763736 +0000 UTC m=+99.867580268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.137832 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.137965 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.138033 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz"
Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.138127 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 22 10:27:00 crc kubenswrapper[4752]: E0122 10:27:00.138512 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-22 10:27:00.638489242 +0000 UTC m=+99.868432160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.161830 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.239069 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:27:00 crc kubenswrapper[4752]: E0122 10:27:00.239348 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.739299014 +0000 UTC m=+99.969241932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.239274 4752 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T10:26:59.278632356Z","Handler":null,"Name":""} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.239703 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh52l\" (UniqueName: \"kubernetes.io/projected/0386a68f-2339-4ef6-8d96-e518b0682b4a-kube-api-access-mh52l\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.239990 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-utilities\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.240167 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.240271 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-catalog-content\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: E0122 10:27:00.240514 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 10:27:00.740494745 +0000 UTC m=+99.970437673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xn8dz" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.242674 4752 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.242762 4752 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.251067 4752 generic.go:334] "Generic (PLEG): container finished" podID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerID="b13532d54c31a5b054a1343ac712ca0cb6240d5f613442015685e2414c5640a5" exitCode=0 Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.251100 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gj" event={"ID":"0d035039-51b4-41aa-9e33-db0a1ca24332","Type":"ContainerDied","Data":"b13532d54c31a5b054a1343ac712ca0cb6240d5f613442015685e2414c5640a5"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.251129 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gj" event={"ID":"0d035039-51b4-41aa-9e33-db0a1ca24332","Type":"ContainerStarted","Data":"7fc145b0f3a6b9e1ea9ca0e6f4b916ea75905dc7316279fb15663dbfb76b5162"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.252582 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.254477 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" event={"ID":"48336b51-2e64-4f1e-a96f-5f866900ba2a","Type":"ContainerStarted","Data":"d9f7c1856f8a78de93c02aa4e694487558ed59165190bd5508e803f175f2d11f"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.254509 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" 
event={"ID":"48336b51-2e64-4f1e-a96f-5f866900ba2a","Type":"ContainerStarted","Data":"27025d7d844c9b488970248b587cd0d620553b966b9ee73c4e1a6252afc06088"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.257078 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" event={"ID":"2ed74970-560a-4f45-84e8-ebedcaf74392","Type":"ContainerDied","Data":"86cff707c958ecdc71daacf4539412b109bce063d26f757cd8a1af90b6372b56"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.257105 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cff707c958ecdc71daacf4539412b109bce063d26f757cd8a1af90b6372b56" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.257146 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.265201 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.270098 4752 generic.go:334] "Generic (PLEG): container finished" podID="71357b8c-126d-4119-943e-653febd0612d" containerID="1ee51ca0faf1552304cb6fd988a449d37852204f887a0a1af2cba0398e1ee87d" exitCode=0 Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.270130 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x72kv" event={"ID":"71357b8c-126d-4119-943e-653febd0612d","Type":"ContainerDied","Data":"1ee51ca0faf1552304cb6fd988a449d37852204f887a0a1af2cba0398e1ee87d"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.270167 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x72kv" event={"ID":"71357b8c-126d-4119-943e-653febd0612d","Type":"ContainerStarted","Data":"51ceae9984997bc02fe267c283d7e785a03733b91c1976afd2af77472d8af0ea"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.273318 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerID="2e4df822c558e9e480ae076f0c369ec310014cb14c797433a472d24866b348b2" exitCode=0 Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.273393 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65n2" event={"ID":"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5","Type":"ContainerDied","Data":"2e4df822c558e9e480ae076f0c369ec310014cb14c797433a472d24866b348b2"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.273423 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65n2" event={"ID":"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5","Type":"ContainerStarted","Data":"730773af0c3413a4408760b47b559e86b1d28493aed5d97332093fe116dbfb8b"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.275988 4752 generic.go:334] "Generic (PLEG): container finished" podID="d7873b3a-ffed-4c96-818c-90117b142098" containerID="df2c4e057d281943bd64261690ad5ca07f35cf5631e8de0003bb27c813ef6647" exitCode=0 Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.276532 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4qst" event={"ID":"d7873b3a-ffed-4c96-818c-90117b142098","Type":"ContainerDied","Data":"df2c4e057d281943bd64261690ad5ca07f35cf5631e8de0003bb27c813ef6647"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 
10:27:00.276581 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4qst" event={"ID":"d7873b3a-ffed-4c96-818c-90117b142098","Type":"ContainerStarted","Data":"0d7212c67dc318051af19061f5ee49d7657da362294f7144ef5ba650b2530fce"} Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.289610 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-g5js4" podStartSLOduration=11.289585426 podStartE2EDuration="11.289585426s" podCreationTimestamp="2026-01-22 10:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:00.285681444 +0000 UTC m=+99.515624372" watchObservedRunningTime="2026-01-22 10:27:00.289585426 +0000 UTC m=+99.519528354" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.347797 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.348019 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-catalog-content\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.348076 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh52l\" (UniqueName: \"kubernetes.io/projected/0386a68f-2339-4ef6-8d96-e518b0682b4a-kube-api-access-mh52l\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.348129 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-utilities\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.348899 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-utilities\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.349132 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-catalog-content\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.365812 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.396947 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh52l\" (UniqueName: \"kubernetes.io/projected/0386a68f-2339-4ef6-8d96-e518b0682b4a-kube-api-access-mh52l\") pod \"redhat-marketplace-j4jbd\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.453141 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.457379 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:27:00 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld Jan 22 10:27:00 crc kubenswrapper[4752]: [+]process-running ok Jan 22 10:27:00 crc kubenswrapper[4752]: healthz check failed Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.457521 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.458946 4752 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.458979 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.468643 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dblhf"] Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.471678 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.480285 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dblhf"] Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.508966 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.513011 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xn8dz\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.635083 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.635726 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.642434 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.642617 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.646201 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.658454 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-utilities\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.658529 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2w2s\" (UniqueName: \"kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.658805 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-catalog-content\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.679765 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.760472 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-catalog-content\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.760985 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d35d055a-2c63-4278-af0f-085300d6ff1d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.761066 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d35d055a-2c63-4278-af0f-085300d6ff1d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.761169 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-catalog-content\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.761172 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-utilities\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.761270 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2w2s\" (UniqueName: \"kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.761459 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-utilities\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.770138 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.781607 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2w2s\" (UniqueName: \"kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s\") pod \"redhat-marketplace-dblhf\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.792299 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.797278 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m8c4r" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.800489 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.875670 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d35d055a-2c63-4278-af0f-085300d6ff1d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.876088 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d35d055a-2c63-4278-af0f-085300d6ff1d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.876533 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d35d055a-2c63-4278-af0f-085300d6ff1d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.927439 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d35d055a-2c63-4278-af0f-085300d6ff1d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.963178 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:00 crc kubenswrapper[4752]: I0122 10:27:00.976801 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4jbd"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.055027 4752 patch_prober.go:28] interesting pod/downloads-7954f5f757-bmh84 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.055427 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bmh84" podUID="5e213e66-9429-41a1-9b53-476794092c7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.055105 4752 patch_prober.go:28] interesting pod/downloads-7954f5f757-bmh84 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.055651 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bmh84" podUID="5e213e66-9429-41a1-9b53-476794092c7f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.118701 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.275221 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dblhf"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.280545 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xn8dz"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.285934 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7","Type":"ContainerStarted","Data":"c67230428a52352a82acf41e933cb8ef57a2f558fd2bac2875ff5f9ca6836ec9"} Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.285980 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7","Type":"ContainerStarted","Data":"3694c3019b0c30af1f676505bbc6422575898e0ea27fb703449ea26d1365bbe7"} Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.288933 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4jbd" event={"ID":"0386a68f-2339-4ef6-8d96-e518b0682b4a","Type":"ContainerStarted","Data":"9d8df439680c8c9761b501b3a9768377966afc10b11cb698c7046a604de2d22c"} Jan 22 10:27:01 crc kubenswrapper[4752]: W0122 10:27:01.293052 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4140f15a_5e23_431b_ad69_a64d54325d19.slice/crio-525156f70e1202a8970960c98238a12e0f86e1762310a9a710448f0c18d66168 WatchSource:0}: 
Error finding container 525156f70e1202a8970960c98238a12e0f86e1762310a9a710448f0c18d66168: Status 404 returned error can't find the container with id 525156f70e1202a8970960c98238a12e0f86e1762310a9a710448f0c18d66168 Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.306291 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.306272766 podStartE2EDuration="2.306272766s" podCreationTimestamp="2026-01-22 10:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:01.302373604 +0000 UTC m=+100.532316512" watchObservedRunningTime="2026-01-22 10:27:01.306272766 +0000 UTC m=+100.536215674" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.458258 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:27:01 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld Jan 22 10:27:01 crc kubenswrapper[4752]: [+]process-running ok Jan 22 10:27:01 crc kubenswrapper[4752]: healthz check failed Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.458594 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.518577 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lpphf"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.519610 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.529626 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.529781 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.539867 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpphf"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.591490 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfz4s\" (UniqueName: \"kubernetes.io/projected/3c7b73c6-cc59-400a-858e-85af0b88a5b9-kube-api-access-vfz4s\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.591564 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-catalog-content\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.591651 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-utilities\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.694712 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-catalog-content\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.695226 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-utilities\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.695263 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfz4s\" (UniqueName: \"kubernetes.io/projected/3c7b73c6-cc59-400a-858e-85af0b88a5b9-kube-api-access-vfz4s\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.695716 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-catalog-content\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.695892 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-utilities\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.735832 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfz4s\" (UniqueName: \"kubernetes.io/projected/3c7b73c6-cc59-400a-858e-85af0b88a5b9-kube-api-access-vfz4s\") pod \"redhat-operators-lpphf\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") " pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.772979 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.773076 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.778470 4752 patch_prober.go:28] interesting pod/console-f9d7485db-jh6kp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.778559 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jh6kp" podUID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 22 10:27:01 crc kubenswrapper[4752]: E0122 10:27:01.805086 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41faa779_87e9_41e1_a547_feba13612d57.slice/crio-conmon-d973aeaa0d015db93080dcd87e209bf37effaf5cfa39a2fd588f7a40586e5403.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.870580 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pjvrm"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.875978 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.882527 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjvrm"] Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.899111 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-catalog-content\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.899206 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-utilities\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.899267 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nkzs\" (UniqueName: \"kubernetes.io/projected/21c6ff12-b36a-4fdc-add4-027b31984b85-kube-api-access-9nkzs\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.950150 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.999823 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nkzs\" (UniqueName: \"kubernetes.io/projected/21c6ff12-b36a-4fdc-add4-027b31984b85-kube-api-access-9nkzs\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:01 crc kubenswrapper[4752]: I0122 10:27:01.999901 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-catalog-content\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:01.999958 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-utilities\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.000516 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-utilities\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.000721 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-catalog-content\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " 
pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.073428 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nkzs\" (UniqueName: \"kubernetes.io/projected/21c6ff12-b36a-4fdc-add4-027b31984b85-kube-api-access-9nkzs\") pod \"redhat-operators-pjvrm\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.221018 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.317445 4752 generic.go:334] "Generic (PLEG): container finished" podID="dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7" containerID="c67230428a52352a82acf41e933cb8ef57a2f558fd2bac2875ff5f9ca6836ec9" exitCode=0 Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.317812 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7","Type":"ContainerDied","Data":"c67230428a52352a82acf41e933cb8ef57a2f558fd2bac2875ff5f9ca6836ec9"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.348108 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d35d055a-2c63-4278-af0f-085300d6ff1d","Type":"ContainerStarted","Data":"8fc61273340e24d0b46d0001bfe9436748bf44a7001b22d650866e44f0aafe1f"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.366140 4752 generic.go:334] "Generic (PLEG): container finished" podID="41faa779-87e9-41e1-a547-feba13612d57" containerID="d973aeaa0d015db93080dcd87e209bf37effaf5cfa39a2fd588f7a40586e5403" exitCode=0 Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.367831 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dblhf" event={"ID":"41faa779-87e9-41e1-a547-feba13612d57","Type":"ContainerDied","Data":"d973aeaa0d015db93080dcd87e209bf37effaf5cfa39a2fd588f7a40586e5403"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.367867 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dblhf" event={"ID":"41faa779-87e9-41e1-a547-feba13612d57","Type":"ContainerStarted","Data":"4ac89ac67994d4946eb9d799b6e458133c9b6c710a8b8d67c30e9b06dda22e58"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.390594 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" event={"ID":"4140f15a-5e23-431b-ad69-a64d54325d19","Type":"ContainerStarted","Data":"67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.390658 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" event={"ID":"4140f15a-5e23-431b-ad69-a64d54325d19","Type":"ContainerStarted","Data":"525156f70e1202a8970960c98238a12e0f86e1762310a9a710448f0c18d66168"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.391123 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.402557 4752 generic.go:334] "Generic (PLEG): container finished" podID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerID="132a6a2a723ebce88cbfda905db99b3d5e1070bec2d43a90dffd50c52e4c5eda" 
exitCode=0 Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.402996 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4jbd" event={"ID":"0386a68f-2339-4ef6-8d96-e518b0682b4a","Type":"ContainerDied","Data":"132a6a2a723ebce88cbfda905db99b3d5e1070bec2d43a90dffd50c52e4c5eda"} Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.424450 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" podStartSLOduration=76.424419515 podStartE2EDuration="1m16.424419515s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:02.424322602 +0000 UTC m=+101.654265510" watchObservedRunningTime="2026-01-22 10:27:02.424419515 +0000 UTC m=+101.654362423" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.458083 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.475135 4752 patch_prober.go:28] interesting pod/router-default-5444994796-ldnv8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 10:27:02 crc kubenswrapper[4752]: [-]has-synced failed: reason withheld Jan 22 10:27:02 crc kubenswrapper[4752]: [+]process-running ok Jan 22 10:27:02 crc kubenswrapper[4752]: healthz check failed Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.475327 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ldnv8" podUID="124095a6-83e3-45da-8bed-6ed5a8f6892b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.702984 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjvrm"] Jan 22 10:27:02 crc kubenswrapper[4752]: I0122 10:27:02.738932 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpphf"] Jan 22 10:27:02 crc kubenswrapper[4752]: W0122 10:27:02.758204 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21c6ff12_b36a_4fdc_add4_027b31984b85.slice/crio-edb174c28c25cd4b1d557471012e8664a13eac171ea51f86664ff3d312ece8a4 WatchSource:0}: Error finding container edb174c28c25cd4b1d557471012e8664a13eac171ea51f86664ff3d312ece8a4: Status 404 returned error can't find the container with id edb174c28c25cd4b1d557471012e8664a13eac171ea51f86664ff3d312ece8a4 Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.426765 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerStarted","Data":"edb174c28c25cd4b1d557471012e8664a13eac171ea51f86664ff3d312ece8a4"} Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.429993 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerStarted","Data":"4aba645354fed7326071594b01ef8f5340f92c821f3add9b71b3598beed47ebf"} Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.434922 4752 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d35d055a-2c63-4278-af0f-085300d6ff1d","Type":"ContainerStarted","Data":"036a84220fa50d203f0fbf6ed2e881159a91c860746c967ee0e5492bdb3d0037"} Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.451993 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.451974909 podStartE2EDuration="3.451974909s" podCreationTimestamp="2026-01-22 10:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:03.450980653 +0000 UTC m=+102.680923561" watchObservedRunningTime="2026-01-22 10:27:03.451974909 +0000 UTC m=+102.681917817" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.461371 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.464768 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ldnv8" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.735076 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.852130 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kubelet-dir\") pod \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.852215 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kube-api-access\") pod \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\" (UID: \"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7\") " Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.852902 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7" (UID: "dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.874331 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7" (UID: "dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.953686 4752 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:27:03 crc kubenswrapper[4752]: I0122 10:27:03.953718 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.132678 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.448830 4752 generic.go:334] "Generic (PLEG): container finished" podID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerID="3dabd32dbf38b5d732e7fa393782acdaff5b346d8e7f4d1936d8959898856eb3" exitCode=0 Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.449522 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerDied","Data":"3dabd32dbf38b5d732e7fa393782acdaff5b346d8e7f4d1936d8959898856eb3"} Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.456571 4752 generic.go:334] "Generic (PLEG): container finished" podID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerID="459027283f874c289123e268d36137c023d75ae92459ceefc46b75caf690324f" exitCode=0 Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.456657 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerDied","Data":"459027283f874c289123e268d36137c023d75ae92459ceefc46b75caf690324f"} Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.458357 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7","Type":"ContainerDied","Data":"3694c3019b0c30af1f676505bbc6422575898e0ea27fb703449ea26d1365bbe7"} Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.458429 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.458441 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3694c3019b0c30af1f676505bbc6422575898e0ea27fb703449ea26d1365bbe7" Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.490103 4752 generic.go:334] "Generic (PLEG): container finished" podID="d35d055a-2c63-4278-af0f-085300d6ff1d" containerID="036a84220fa50d203f0fbf6ed2e881159a91c860746c967ee0e5492bdb3d0037" exitCode=0 Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.490898 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d35d055a-2c63-4278-af0f-085300d6ff1d","Type":"ContainerDied","Data":"036a84220fa50d203f0fbf6ed2e881159a91c860746c967ee0e5492bdb3d0037"} Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.873096 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:27:04 crc kubenswrapper[4752]: I0122 10:27:04.890555 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bbb033b-8d31-4200-b77f-4910b5170085-metrics-certs\") pod \"network-metrics-daemon-69crw\" (UID: \"6bbb033b-8d31-4200-b77f-4910b5170085\") " pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.114418 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-69crw" Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.364576 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-69crw"] Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.501174 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-69crw" event={"ID":"6bbb033b-8d31-4200-b77f-4910b5170085","Type":"ContainerStarted","Data":"f4926c49233c137803159db7037b2dfe41eb2ebf58c3d98bd27d6084b78b7a18"} Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.751077 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.899340 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d35d055a-2c63-4278-af0f-085300d6ff1d-kube-api-access\") pod \"d35d055a-2c63-4278-af0f-085300d6ff1d\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.899401 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d35d055a-2c63-4278-af0f-085300d6ff1d-kubelet-dir\") pod \"d35d055a-2c63-4278-af0f-085300d6ff1d\" (UID: \"d35d055a-2c63-4278-af0f-085300d6ff1d\") " Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.899674 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d35d055a-2c63-4278-af0f-085300d6ff1d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d35d055a-2c63-4278-af0f-085300d6ff1d" (UID: "d35d055a-2c63-4278-af0f-085300d6ff1d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:27:05 crc kubenswrapper[4752]: I0122 10:27:05.916042 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35d055a-2c63-4278-af0f-085300d6ff1d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d35d055a-2c63-4278-af0f-085300d6ff1d" (UID: "d35d055a-2c63-4278-af0f-085300d6ff1d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:27:06 crc kubenswrapper[4752]: I0122 10:27:06.002007 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d35d055a-2c63-4278-af0f-085300d6ff1d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:27:06 crc kubenswrapper[4752]: I0122 10:27:06.002055 4752 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d35d055a-2c63-4278-af0f-085300d6ff1d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:27:06 crc kubenswrapper[4752]: I0122 10:27:06.514219 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 10:27:06 crc kubenswrapper[4752]: I0122 10:27:06.514224 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d35d055a-2c63-4278-af0f-085300d6ff1d","Type":"ContainerDied","Data":"8fc61273340e24d0b46d0001bfe9436748bf44a7001b22d650866e44f0aafe1f"} Jan 22 10:27:06 crc kubenswrapper[4752]: I0122 10:27:06.514266 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fc61273340e24d0b46d0001bfe9436748bf44a7001b22d650866e44f0aafe1f" Jan 22 10:27:06 crc kubenswrapper[4752]: I0122 10:27:06.517503 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-69crw" event={"ID":"6bbb033b-8d31-4200-b77f-4910b5170085","Type":"ContainerStarted","Data":"0980a2a46958c1cc51199698cb2ae9a3b4f1b3441d776276ec943f456dc61d99"} Jan 22 10:27:07 crc kubenswrapper[4752]: I0122 10:27:07.321141 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tbzz8" Jan 22 10:27:08 crc kubenswrapper[4752]: I0122 10:27:08.533041 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-5t8fw_1e312683-699a-4ea1-9914-d9dc8b237cb4/cluster-samples-operator/0.log" Jan 22 10:27:08 crc kubenswrapper[4752]: I0122 10:27:08.533319 4752 generic.go:334] "Generic (PLEG): container finished" podID="1e312683-699a-4ea1-9914-d9dc8b237cb4" containerID="6c6a37f5cc92119c8596dd38bdedaa97124f51ed139456a570c4ae1c1de8f9b3" exitCode=2 Jan 22 10:27:08 crc kubenswrapper[4752]: I0122 10:27:08.533399 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" event={"ID":"1e312683-699a-4ea1-9914-d9dc8b237cb4","Type":"ContainerDied","Data":"6c6a37f5cc92119c8596dd38bdedaa97124f51ed139456a570c4ae1c1de8f9b3"} Jan 22 10:27:08 crc kubenswrapper[4752]: I0122 10:27:08.534133 4752 scope.go:117] "RemoveContainer" containerID="6c6a37f5cc92119c8596dd38bdedaa97124f51ed139456a570c4ae1c1de8f9b3" Jan 22 10:27:08 crc kubenswrapper[4752]: I0122 10:27:08.540293 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-69crw" event={"ID":"6bbb033b-8d31-4200-b77f-4910b5170085","Type":"ContainerStarted","Data":"382b3ec2ea4fd2868399f68e5661f500dfcf8db8efc79ab2d4cf8487f9aa4d65"} Jan 22 10:27:08 crc kubenswrapper[4752]: I0122 10:27:08.568778 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-69crw" podStartSLOduration=82.568749629 podStartE2EDuration="1m22.568749629s" podCreationTimestamp="2026-01-22 10:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:08.562182607 +0000 UTC m=+107.792125555" watchObservedRunningTime="2026-01-22 10:27:08.568749629 +0000 UTC m=+107.798692537" Jan 22 10:27:10 crc kubenswrapper[4752]: I0122 10:27:10.552669 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-5t8fw_1e312683-699a-4ea1-9914-d9dc8b237cb4/cluster-samples-operator/0.log" Jan 22 10:27:10 crc kubenswrapper[4752]: I0122 10:27:10.553012 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-5t8fw" event={"ID":"1e312683-699a-4ea1-9914-d9dc8b237cb4","Type":"ContainerStarted","Data":"07bd10e8a828646f36aa150fe4a77b4e275110d1d2894a2ea0c7e4dcc39f8ea9"} Jan 22 10:27:11 crc kubenswrapper[4752]: I0122 10:27:11.051885 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bmh84" Jan 22 10:27:11 crc kubenswrapper[4752]: I0122 10:27:11.771358 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:27:11 crc kubenswrapper[4752]: I0122 10:27:11.775417 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:27:20 crc kubenswrapper[4752]: I0122 10:27:20.780955 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:27:32 crc kubenswrapper[4752]: I0122 10:27:32.495624 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hrggg" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.611636 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.612153 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kttwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4r8gj_openshift-marketplace(0d035039-51b4-41aa-9e33-db0a1ca24332): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.613323 4752 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4r8gj" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.727106 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.727413 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ct8vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z4qst_openshift-marketplace(d7873b3a-ffed-4c96-818c-90117b142098): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.728927 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z4qst" podUID="d7873b3a-ffed-4c96-818c-90117b142098" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.764348 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.764607 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8mp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x65n2_openshift-marketplace(eb0789a0-f347-4a6f-ba09-cd7cb558d5b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:27:36 crc kubenswrapper[4752]: E0122 10:27:36.767334 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x65n2" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.737535 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 10:27:39 crc kubenswrapper[4752]: E0122 10:27:39.744446 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35d055a-2c63-4278-af0f-085300d6ff1d" containerName="pruner" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.744481 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35d055a-2c63-4278-af0f-085300d6ff1d" containerName="pruner" Jan 22 10:27:39 crc kubenswrapper[4752]: E0122 10:27:39.744508 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7" containerName="pruner" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.744518 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7" containerName="pruner" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.744710 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9d1edf-48d9-484e-b5b3-4a9d8e9e64e7" containerName="pruner" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.744725 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35d055a-2c63-4278-af0f-085300d6ff1d" containerName="pruner" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.745114 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.745209 4752 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.753240 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.753518 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.753690 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.753743 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.855926 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.856148 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.856317 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:39 crc kubenswrapper[4752]: I0122 10:27:39.878237 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:40 crc kubenswrapper[4752]: I0122 10:27:40.114043 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.640293 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4r8gj" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.640339 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-z4qst" podUID="d7873b3a-ffed-4c96-818c-90117b142098" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.640338 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x65n2" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.654387 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.654579 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2w2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dblhf_openshift-marketplace(41faa779-87e9-41e1-a547-feba13612d57): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.655806 4752 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dblhf" podUID="41faa779-87e9-41e1-a547-feba13612d57" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.744481 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.745112 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh52l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j4jbd_openshift-marketplace(0386a68f-2339-4ef6-8d96-e518b0682b4a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.746535 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-j4jbd" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.759844 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.760040 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xks7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x72kv_openshift-marketplace(71357b8c-126d-4119-943e-653febd0612d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 10:27:43 crc kubenswrapper[4752]: E0122 10:27:43.761629 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x72kv" podUID="71357b8c-126d-4119-943e-653febd0612d" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.330182 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.330830 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.339116 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.448036 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-var-lock\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.448274 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/512fd0f5-4e67-429d-abe3-7eea327491ee-kube-api-access\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.448439 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-kubelet-dir\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.549379 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/512fd0f5-4e67-429d-abe3-7eea327491ee-kube-api-access\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.549486 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-kubelet-dir\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.549545 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-var-lock\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.550253 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-var-lock\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.551317 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-kubelet-dir\") pod \"installer-9-crc\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.568403 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/512fd0f5-4e67-429d-abe3-7eea327491ee-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"512fd0f5-4e67-429d-abe3-7eea327491ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:45 crc kubenswrapper[4752]: I0122 10:27:45.666992 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.069124 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.071455 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.088192 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.170491 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.171054 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.171089 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.174154 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.174278 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.182190 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.183285 4752 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.196767 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.210498 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.232702 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.242441 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 10:27:47 crc kubenswrapper[4752]: I0122 10:27:47.468916 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 10:27:47 crc kubenswrapper[4752]: E0122 10:27:47.945799 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x72kv" podUID="71357b8c-126d-4119-943e-653febd0612d" Jan 22 10:27:47 crc kubenswrapper[4752]: E0122 10:27:47.945839 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-j4jbd" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" Jan 22 10:27:47 crc kubenswrapper[4752]: E0122 10:27:47.946201 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dblhf" podUID="41faa779-87e9-41e1-a547-feba13612d57" Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.338757 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 10:27:48 crc kubenswrapper[4752]: W0122 10:27:48.611019 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-12424949042f95b49c7aa776530a2424a4ad46f4ac29f411e8eaabcf865543ca WatchSource:0}: Error finding container 12424949042f95b49c7aa776530a2424a4ad46f4ac29f411e8eaabcf865543ca: Status 404 returned error can't find the container with id 12424949042f95b49c7aa776530a2424a4ad46f4ac29f411e8eaabcf865543ca Jan 22 10:27:48 crc kubenswrapper[4752]: W0122 10:27:48.639018 4752 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9ac4d5056fa2541861f2a023d16e7966a7b2219a2fccdbdc51cc302f1d88f943 WatchSource:0}: Error finding container 9ac4d5056fa2541861f2a023d16e7966a7b2219a2fccdbdc51cc302f1d88f943: Status 404 returned error can't find the container with id 9ac4d5056fa2541861f2a023d16e7966a7b2219a2fccdbdc51cc302f1d88f943 Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.668058 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.792827 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2","Type":"ContainerStarted","Data":"ed956ce1ebc034de79e512e7840dd3790f13f67a70c9d02cbd6af97cbe0a69e1"} Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.793984 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"512fd0f5-4e67-429d-abe3-7eea327491ee","Type":"ContainerStarted","Data":"0ee9cd40774f4f49b647b4d5f89cdcc3c9c31a8421d572c0b82d673eb18e0778"} Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.796418 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerStarted","Data":"b1bf00f87f27d665f8a6c3ea8056f43dcc25405de494846d38d0b77eef52b6e9"} Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.797621 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9ac4d5056fa2541861f2a023d16e7966a7b2219a2fccdbdc51cc302f1d88f943"} Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.798562 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e5fdf16d815e9339aee24e352f9a349b7919802e27e0e883c493c56b2d634027"} Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.811607 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerStarted","Data":"180e7716eb53c6ef72eb13c8061b8c9066fc56434fb92e8e07a32666258f1e45"} Jan 22 10:27:48 crc kubenswrapper[4752]: I0122 10:27:48.814594 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"12424949042f95b49c7aa776530a2424a4ad46f4ac29f411e8eaabcf865543ca"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.832498 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"512fd0f5-4e67-429d-abe3-7eea327491ee","Type":"ContainerStarted","Data":"b3fa775059178e3bba58e652005cc6128692a4b3b536cfb69e936e119a8bf562"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.838765 4752 generic.go:334] "Generic (PLEG): container finished" podID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerID="b1bf00f87f27d665f8a6c3ea8056f43dcc25405de494846d38d0b77eef52b6e9" exitCode=0 Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.838998 4752 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerDied","Data":"b1bf00f87f27d665f8a6c3ea8056f43dcc25405de494846d38d0b77eef52b6e9"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.841678 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"10e4745846802ea6e63df4aad54e16171ff571b4cef537f524a4d657fe9c3d55"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.852427 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.852411386 podStartE2EDuration="4.852411386s" podCreationTimestamp="2026-01-22 10:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:49.851017587 +0000 UTC m=+149.080960495" watchObservedRunningTime="2026-01-22 10:27:49.852411386 +0000 UTC m=+149.082354294" Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.857308 4752 generic.go:334] "Generic (PLEG): container finished" podID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerID="180e7716eb53c6ef72eb13c8061b8c9066fc56434fb92e8e07a32666258f1e45" exitCode=0 Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.857398 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerDied","Data":"180e7716eb53c6ef72eb13c8061b8c9066fc56434fb92e8e07a32666258f1e45"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.862691 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"05ec8e929097dcdb2a94d9473954d6d0b4152eba26cf213e789ed03c094b270c"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.862832 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.867452 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2","Type":"ContainerStarted","Data":"e766fa9fb5c1d9a990641a90caa4144935809ace4cba1af8e28882cc7d006444"} Jan 22 10:27:49 crc kubenswrapper[4752]: I0122 10:27:49.946786 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.946748987 podStartE2EDuration="10.946748987s" podCreationTimestamp="2026-01-22 10:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:27:49.934677894 +0000 UTC m=+149.164620812" watchObservedRunningTime="2026-01-22 10:27:49.946748987 +0000 UTC m=+149.176691905" Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.875306 4752 generic.go:334] "Generic (PLEG): container finished" podID="be4cd4a6-0f36-4347-8cde-c9b70de3a5e2" containerID="e766fa9fb5c1d9a990641a90caa4144935809ace4cba1af8e28882cc7d006444" exitCode=0 Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.875901 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2","Type":"ContainerDied","Data":"e766fa9fb5c1d9a990641a90caa4144935809ace4cba1af8e28882cc7d006444"} Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.880670 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerStarted","Data":"1792be8e9bddfcf5a7f948b1975bde11480af466b3cf6188bad2f4dfc09f3a9f"} Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.882671 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c054a41f3b7e21ad9218fc70eb6773cc970e01048811676238985fc89e040d1e"} Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.888217 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerStarted","Data":"996cdda472a71efb9ecedd9d5bc44243492189a1d681caab808b37d573a3f902"} Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.936564 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pjvrm" podStartSLOduration=4.134322461 podStartE2EDuration="49.936548242s" podCreationTimestamp="2026-01-22 10:27:01 +0000 UTC" firstStartedPulling="2026-01-22 10:27:04.451739586 +0000 UTC m=+103.681682494" lastFinishedPulling="2026-01-22 10:27:50.253965357 +0000 UTC m=+149.483908275" observedRunningTime="2026-01-22 10:27:50.933907057 +0000 UTC m=+150.163849965" watchObservedRunningTime="2026-01-22 10:27:50.936548242 +0000 UTC m=+150.166491160" Jan 22 10:27:50 crc kubenswrapper[4752]: I0122 10:27:50.956632 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lpphf" podStartSLOduration=4.093486315 podStartE2EDuration="49.956616042s" podCreationTimestamp="2026-01-22 10:27:01 +0000 UTC" firstStartedPulling="2026-01-22 10:27:04.458951505 +0000 UTC m=+103.688894413" lastFinishedPulling="2026-01-22 10:27:50.322081232 +0000 UTC m=+149.552024140" observedRunningTime="2026-01-22 10:27:50.955038637 +0000 UTC m=+150.184981545" watchObservedRunningTime="2026-01-22 10:27:50.956616042 +0000 UTC m=+150.186558950" Jan 22 10:27:51 crc kubenswrapper[4752]: I0122 10:27:51.951571 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:51 crc kubenswrapper[4752]: I0122 10:27:51.952227 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.119081 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.222652 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.222691 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.244063 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kube-api-access\") pod \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.244126 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kubelet-dir\") pod \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\" (UID: \"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2\") " Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.244354 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be4cd4a6-0f36-4347-8cde-c9b70de3a5e2" (UID: "be4cd4a6-0f36-4347-8cde-c9b70de3a5e2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.252828 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be4cd4a6-0f36-4347-8cde-c9b70de3a5e2" (UID: "be4cd4a6-0f36-4347-8cde-c9b70de3a5e2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.345034 4752 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.345068 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4cd4a6-0f36-4347-8cde-c9b70de3a5e2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.903142 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"be4cd4a6-0f36-4347-8cde-c9b70de3a5e2","Type":"ContainerDied","Data":"ed956ce1ebc034de79e512e7840dd3790f13f67a70c9d02cbd6af97cbe0a69e1"} Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.903364 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed956ce1ebc034de79e512e7840dd3790f13f67a70c9d02cbd6af97cbe0a69e1" Jan 22 10:27:52 crc kubenswrapper[4752]: I0122 10:27:52.903401 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 10:27:53 crc kubenswrapper[4752]: I0122 10:27:53.010568 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpphf" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="registry-server" probeResult="failure" output=< Jan 22 10:27:53 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 10:27:53 crc kubenswrapper[4752]: > Jan 22 10:27:53 crc kubenswrapper[4752]: I0122 10:27:53.262563 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pjvrm" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="registry-server" probeResult="failure" output=< Jan 22 10:27:53 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 10:27:53 crc kubenswrapper[4752]: > Jan 22 10:27:57 crc kubenswrapper[4752]: I0122 10:27:57.723993 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:27:57 crc kubenswrapper[4752]: I0122 10:27:57.724312 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:28:03 crc kubenswrapper[4752]: I0122 10:28:03.004206 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpphf" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="registry-server" probeResult="failure" output=< Jan 22 10:28:03 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 10:28:03 crc kubenswrapper[4752]: > Jan 22 10:28:03 crc kubenswrapper[4752]: I0122 10:28:03.256929 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pjvrm" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="registry-server" probeResult="failure" output=< Jan 22 10:28:03 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 10:28:03 crc kubenswrapper[4752]: > Jan 22 10:28:12 crc kubenswrapper[4752]: I0122 10:28:12.010613 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:28:12 crc kubenswrapper[4752]: I0122 10:28:12.057479 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:28:12 crc kubenswrapper[4752]: I0122 10:28:12.260071 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:28:12 crc kubenswrapper[4752]: I0122 10:28:12.297670 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:28:13 crc kubenswrapper[4752]: I0122 10:28:13.254170 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjvrm"] Jan 22 10:28:14 crc kubenswrapper[4752]: I0122 10:28:14.032784 4752 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-pjvrm" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="registry-server" containerID="cri-o://1792be8e9bddfcf5a7f948b1975bde11480af466b3cf6188bad2f4dfc09f3a9f" gracePeriod=2 Jan 22 10:28:16 crc kubenswrapper[4752]: I0122 10:28:16.046502 4752 generic.go:334] "Generic (PLEG): container finished" podID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerID="1792be8e9bddfcf5a7f948b1975bde11480af466b3cf6188bad2f4dfc09f3a9f" exitCode=0 Jan 22 10:28:16 crc kubenswrapper[4752]: I0122 10:28:16.046555 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerDied","Data":"1792be8e9bddfcf5a7f948b1975bde11480af466b3cf6188bad2f4dfc09f3a9f"} Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.890376 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g8qkl"] Jan 22 10:28:19 crc kubenswrapper[4752]: E0122 10:28:19.891106 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4cd4a6-0f36-4347-8cde-c9b70de3a5e2" containerName="pruner" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.891118 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4cd4a6-0f36-4347-8cde-c9b70de3a5e2" containerName="pruner" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.891226 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4cd4a6-0f36-4347-8cde-c9b70de3a5e2" containerName="pruner" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.891586 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.902933 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g8qkl"] Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.922946 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-registry-tls\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.922996 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-bound-sa-token\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.923041 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/845573f7-b60d-40ce-8a84-45507baa3934-trusted-ca\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.923069 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhgq\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-kube-api-access-tzhgq\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: 
\"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.923096 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/845573f7-b60d-40ce-8a84-45507baa3934-registry-certificates\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.923120 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/845573f7-b60d-40ce-8a84-45507baa3934-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.923228 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.923336 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/845573f7-b60d-40ce-8a84-45507baa3934-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:19 crc kubenswrapper[4752]: I0122 10:28:19.942040 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025059 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-registry-tls\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025112 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-bound-sa-token\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025145 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/845573f7-b60d-40ce-8a84-45507baa3934-trusted-ca\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 
22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025164 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhgq\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-kube-api-access-tzhgq\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025183 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/845573f7-b60d-40ce-8a84-45507baa3934-registry-certificates\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025200 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/845573f7-b60d-40ce-8a84-45507baa3934-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025238 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/845573f7-b60d-40ce-8a84-45507baa3934-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.025705 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/845573f7-b60d-40ce-8a84-45507baa3934-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.027014 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/845573f7-b60d-40ce-8a84-45507baa3934-registry-certificates\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.027368 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/845573f7-b60d-40ce-8a84-45507baa3934-trusted-ca\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.030937 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-registry-tls\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.035399 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/845573f7-b60d-40ce-8a84-45507baa3934-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.054665 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-bound-sa-token\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.054928 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhgq\" (UniqueName: \"kubernetes.io/projected/845573f7-b60d-40ce-8a84-45507baa3934-kube-api-access-tzhgq\") pod \"image-registry-66df7c8f76-g8qkl\" (UID: \"845573f7-b60d-40ce-8a84-45507baa3934\") " pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:20 crc kubenswrapper[4752]: I0122 10:28:20.212556 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.754314 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.851799 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-catalog-content\") pod \"21c6ff12-b36a-4fdc-add4-027b31984b85\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.851912 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nkzs\" (UniqueName: \"kubernetes.io/projected/21c6ff12-b36a-4fdc-add4-027b31984b85-kube-api-access-9nkzs\") pod \"21c6ff12-b36a-4fdc-add4-027b31984b85\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.851981 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-utilities\") pod \"21c6ff12-b36a-4fdc-add4-027b31984b85\" (UID: \"21c6ff12-b36a-4fdc-add4-027b31984b85\") " Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.852922 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-utilities" (OuterVolumeSpecName: "utilities") pod "21c6ff12-b36a-4fdc-add4-027b31984b85" (UID: "21c6ff12-b36a-4fdc-add4-027b31984b85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.859105 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c6ff12-b36a-4fdc-add4-027b31984b85-kube-api-access-9nkzs" (OuterVolumeSpecName: "kube-api-access-9nkzs") pod "21c6ff12-b36a-4fdc-add4-027b31984b85" (UID: "21c6ff12-b36a-4fdc-add4-027b31984b85"). InnerVolumeSpecName "kube-api-access-9nkzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.953450 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nkzs\" (UniqueName: \"kubernetes.io/projected/21c6ff12-b36a-4fdc-add4-027b31984b85-kube-api-access-9nkzs\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.953487 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:21 crc kubenswrapper[4752]: I0122 10:28:21.964223 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21c6ff12-b36a-4fdc-add4-027b31984b85" (UID: "21c6ff12-b36a-4fdc-add4-027b31984b85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.056833 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21c6ff12-b36a-4fdc-add4-027b31984b85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.080226 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjvrm" event={"ID":"21c6ff12-b36a-4fdc-add4-027b31984b85","Type":"ContainerDied","Data":"edb174c28c25cd4b1d557471012e8664a13eac171ea51f86664ff3d312ece8a4"} Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.080271 4752 scope.go:117] "RemoveContainer" containerID="1792be8e9bddfcf5a7f948b1975bde11480af466b3cf6188bad2f4dfc09f3a9f" Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.080368 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pjvrm" Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.113145 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjvrm"] Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.114182 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pjvrm"] Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.170361 4752 scope.go:117] "RemoveContainer" containerID="b1bf00f87f27d665f8a6c3ea8056f43dcc25405de494846d38d0b77eef52b6e9" Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.215889 4752 scope.go:117] "RemoveContainer" containerID="3dabd32dbf38b5d732e7fa393782acdaff5b346d8e7f4d1936d8959898856eb3" Jan 22 10:28:22 crc kubenswrapper[4752]: I0122 10:28:22.452877 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g8qkl"] Jan 22 10:28:22 crc kubenswrapper[4752]: E0122 10:28:22.903482 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0386a68f_2339_4ef6_8d96_e518b0682b4a.slice/crio-conmon-2b31dc47d0f37e6e7aa0e57f4bc3507bb929ad37d69d769c0aa8b52117cd6949.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.088685 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerID="8b7949b92af25723b2556d2ddd46a319eeaaea1544ffe90a83189161d21f742b" exitCode=0 Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.088764 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65n2" event={"ID":"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5","Type":"ContainerDied","Data":"8b7949b92af25723b2556d2ddd46a319eeaaea1544ffe90a83189161d21f742b"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.093953 4752 generic.go:334] "Generic (PLEG): container finished" podID="d7873b3a-ffed-4c96-818c-90117b142098" containerID="a0bc945ba63b06e35aa199c887938a8032e87e0d12949f86cd65df9ab560e852" exitCode=0 Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.093987 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4qst" event={"ID":"d7873b3a-ffed-4c96-818c-90117b142098","Type":"ContainerDied","Data":"a0bc945ba63b06e35aa199c887938a8032e87e0d12949f86cd65df9ab560e852"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.112876 4752 generic.go:334] "Generic (PLEG): container finished" podID="41faa779-87e9-41e1-a547-feba13612d57" containerID="2a4e26a274b6134b193c177d159564b749dab10fc7e984d2747ef2bc60b23d6d" exitCode=0 Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.117196 4752 generic.go:334] "Generic (PLEG): container finished" podID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerID="9c357e0e210f3b1838c82e71516ade6ab4ade350c681e700976d64244d5b39de" exitCode=0 Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.121183 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" path="/var/lib/kubelet/pods/21c6ff12-b36a-4fdc-add4-027b31984b85/volumes" Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122028 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dblhf" 
event={"ID":"41faa779-87e9-41e1-a547-feba13612d57","Type":"ContainerDied","Data":"2a4e26a274b6134b193c177d159564b749dab10fc7e984d2747ef2bc60b23d6d"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122181 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gj" event={"ID":"0d035039-51b4-41aa-9e33-db0a1ca24332","Type":"ContainerDied","Data":"9c357e0e210f3b1838c82e71516ade6ab4ade350c681e700976d64244d5b39de"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122292 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" event={"ID":"845573f7-b60d-40ce-8a84-45507baa3934","Type":"ContainerStarted","Data":"845ea1476fff701cb57b7007d19d512948b494929797352ad4ca8813c7722952"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122427 4752 generic.go:334] "Generic (PLEG): container finished" podID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerID="2b31dc47d0f37e6e7aa0e57f4bc3507bb929ad37d69d769c0aa8b52117cd6949" exitCode=0 Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122513 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122561 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" event={"ID":"845573f7-b60d-40ce-8a84-45507baa3934","Type":"ContainerStarted","Data":"1316d5a069354b8031dd869640968a9608f032239b36f8a09e955948fcbcc596"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.122582 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4jbd" event={"ID":"0386a68f-2339-4ef6-8d96-e518b0682b4a","Type":"ContainerDied","Data":"2b31dc47d0f37e6e7aa0e57f4bc3507bb929ad37d69d769c0aa8b52117cd6949"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.131637 4752 generic.go:334] "Generic (PLEG): container finished" podID="71357b8c-126d-4119-943e-653febd0612d" containerID="02bc51a2f7aab3ebc65d682a36d0a264b44088c971b43715d548f5d39db4ee12" exitCode=0 Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.132201 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x72kv" event={"ID":"71357b8c-126d-4119-943e-653febd0612d","Type":"ContainerDied","Data":"02bc51a2f7aab3ebc65d682a36d0a264b44088c971b43715d548f5d39db4ee12"} Jan 22 10:28:23 crc kubenswrapper[4752]: I0122 10:28:23.210514 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" podStartSLOduration=4.210486366 podStartE2EDuration="4.210486366s" podCreationTimestamp="2026-01-22 10:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:28:23.207751668 +0000 UTC m=+182.437694586" watchObservedRunningTime="2026-01-22 10:28:23.210486366 +0000 UTC m=+182.440429274" Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.140119 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65n2" event={"ID":"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5","Type":"ContainerStarted","Data":"546483283ddbe83b198683280058b60265726d2659ff4d9dacb3cb1aa0336bf6"} Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.143576 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-z4qst" event={"ID":"d7873b3a-ffed-4c96-818c-90117b142098","Type":"ContainerStarted","Data":"9dcea2c6acb7b4c2390a7defdea1ef3c4c04be179397d548862624a19da216e3"} Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.146111 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dblhf" event={"ID":"41faa779-87e9-41e1-a547-feba13612d57","Type":"ContainerStarted","Data":"3759f70ce942e032865509c98b807fd81c3d23d29562d220cdfb5ad48fb84579"} Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.148416 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gj" event={"ID":"0d035039-51b4-41aa-9e33-db0a1ca24332","Type":"ContainerStarted","Data":"ecf84d21641b383b9ecf1c8911e63d55aa2c977315ca6d29ac320d75d33bae49"} Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.152047 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4jbd" event={"ID":"0386a68f-2339-4ef6-8d96-e518b0682b4a","Type":"ContainerStarted","Data":"0bec74f3ce8596e2596893804ed9c55064e49bb92c2e7bfa89b7508ac92e80ff"} Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.155083 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x72kv" event={"ID":"71357b8c-126d-4119-943e-653febd0612d","Type":"ContainerStarted","Data":"3b967c1968d70ece4a0d95ea666394d9a9ae3d92b33dba9effab6ec240e14391"} Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.161441 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x65n2" podStartSLOduration=2.943962555 podStartE2EDuration="1m26.161420807s" podCreationTimestamp="2026-01-22 10:26:58 +0000 UTC" firstStartedPulling="2026-01-22 10:27:00.274671287 +0000 UTC m=+99.504614185" lastFinishedPulling="2026-01-22 10:28:23.492129529 +0000 UTC m=+182.722072437" observedRunningTime="2026-01-22 10:28:24.158766832 +0000 UTC m=+183.388709740" watchObservedRunningTime="2026-01-22 10:28:24.161420807 +0000 UTC m=+183.391363715" Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.185539 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x72kv" podStartSLOduration=2.837226423 podStartE2EDuration="1m26.185514572s" podCreationTimestamp="2026-01-22 10:26:58 +0000 UTC" firstStartedPulling="2026-01-22 10:27:00.274294867 +0000 UTC m=+99.504237775" lastFinishedPulling="2026-01-22 10:28:23.622583016 +0000 UTC m=+182.852525924" observedRunningTime="2026-01-22 10:28:24.183426522 +0000 UTC m=+183.413369440" watchObservedRunningTime="2026-01-22 10:28:24.185514572 +0000 UTC m=+183.415457480" Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.204307 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4r8gj" podStartSLOduration=2.937132913 podStartE2EDuration="1m26.204287795s" podCreationTimestamp="2026-01-22 10:26:58 +0000 UTC" firstStartedPulling="2026-01-22 10:27:00.252379495 +0000 UTC m=+99.482322393" lastFinishedPulling="2026-01-22 10:28:23.519534367 +0000 UTC m=+182.749477275" observedRunningTime="2026-01-22 10:28:24.200812676 +0000 UTC m=+183.430755594" watchObservedRunningTime="2026-01-22 10:28:24.204287795 +0000 UTC m=+183.434230703" Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.243167 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-j4jbd" podStartSLOduration=3.06297953 podStartE2EDuration="1m24.243153259s" podCreationTimestamp="2026-01-22 10:27:00 +0000 UTC" firstStartedPulling="2026-01-22 10:27:02.406517687 +0000 UTC m=+101.636460585" lastFinishedPulling="2026-01-22 10:28:23.586691406 +0000 UTC m=+182.816634314" observedRunningTime="2026-01-22 10:28:24.221072752 +0000 UTC m=+183.451015670" watchObservedRunningTime="2026-01-22 10:28:24.243153259 +0000 UTC m=+183.473096167" Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.244506 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dblhf" podStartSLOduration=2.948799675 podStartE2EDuration="1m24.244501008s" podCreationTimestamp="2026-01-22 10:27:00 +0000 UTC" firstStartedPulling="2026-01-22 10:27:02.38556467 +0000 UTC m=+101.615507578" lastFinishedPulling="2026-01-22 10:28:23.681266003 +0000 UTC m=+182.911208911" observedRunningTime="2026-01-22 10:28:24.242022117 +0000 UTC m=+183.471965025" watchObservedRunningTime="2026-01-22 10:28:24.244501008 +0000 UTC m=+183.474443916" Jan 22 10:28:24 crc kubenswrapper[4752]: I0122 10:28:24.265293 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z4qst" podStartSLOduration=3.033634114 podStartE2EDuration="1m26.265278848s" podCreationTimestamp="2026-01-22 10:26:58 +0000 UTC" firstStartedPulling="2026-01-22 10:27:00.278719373 +0000 UTC m=+99.508662281" lastFinishedPulling="2026-01-22 10:28:23.510364107 +0000 UTC m=+182.740307015" observedRunningTime="2026-01-22 10:28:24.265258998 +0000 UTC m=+183.495201906" watchObservedRunningTime="2026-01-22 10:28:24.265278848 +0000 UTC m=+183.495221756" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.605220 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zw6f2"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.892016 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x65n2"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.892654 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x65n2" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="registry-server" containerID="cri-o://546483283ddbe83b198683280058b60265726d2659ff4d9dacb3cb1aa0336bf6" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.901924 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x72kv"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.902119 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x72kv" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="registry-server" containerID="cri-o://3b967c1968d70ece4a0d95ea666394d9a9ae3d92b33dba9effab6ec240e14391" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.912804 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r8gj"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.913063 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4r8gj" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="registry-server" 
containerID="cri-o://ecf84d21641b383b9ecf1c8911e63d55aa2c977315ca6d29ac320d75d33bae49" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.921413 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4qst"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.921635 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z4qst" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="registry-server" containerID="cri-o://9dcea2c6acb7b4c2390a7defdea1ef3c4c04be179397d548862624a19da216e3" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.932479 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gx9kk"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.932799 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" containerName="marketplace-operator" containerID="cri-o://7faf98919cf1ce94830ec9853f51cd485e85d8422b5cdca0b02a264d72e5af59" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.941409 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dblhf"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.941748 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dblhf" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="registry-server" containerID="cri-o://3759f70ce942e032865509c98b807fd81c3d23d29562d220cdfb5ad48fb84579" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.956845 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4jbd"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.957815 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j4jbd" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="registry-server" containerID="cri-o://0bec74f3ce8596e2596893804ed9c55064e49bb92c2e7bfa89b7508ac92e80ff" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.978953 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dx6f6"] Jan 22 10:28:26 crc kubenswrapper[4752]: E0122 10:28:26.979243 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="extract-content" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.979264 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="extract-content" Jan 22 10:28:26 crc kubenswrapper[4752]: E0122 10:28:26.979280 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="extract-utilities" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.979289 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="extract-utilities" Jan 22 10:28:26 crc kubenswrapper[4752]: E0122 10:28:26.979299 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="registry-server" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.979305 
4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="registry-server" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.979403 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c6ff12-b36a-4fdc-add4-027b31984b85" containerName="registry-server" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.979876 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.980236 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpphf"] Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.980475 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpphf" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="registry-server" containerID="cri-o://996cdda472a71efb9ecedd9d5bc44243492189a1d681caab808b37d573a3f902" gracePeriod=30 Jan 22 10:28:26 crc kubenswrapper[4752]: I0122 10:28:26.990106 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dx6f6"] Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.124756 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/565ffec2-a221-48d7-b657-a59c7dff1de1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.124868 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/565ffec2-a221-48d7-b657-a59c7dff1de1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.124912 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7gr\" (UniqueName: \"kubernetes.io/projected/565ffec2-a221-48d7-b657-a59c7dff1de1-kube-api-access-8q7gr\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.176347 4752 generic.go:334] "Generic (PLEG): container finished" podID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerID="ecf84d21641b383b9ecf1c8911e63d55aa2c977315ca6d29ac320d75d33bae49" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.176422 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gj" event={"ID":"0d035039-51b4-41aa-9e33-db0a1ca24332","Type":"ContainerDied","Data":"ecf84d21641b383b9ecf1c8911e63d55aa2c977315ca6d29ac320d75d33bae49"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.178415 4752 generic.go:334] "Generic (PLEG): container finished" podID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" containerID="7faf98919cf1ce94830ec9853f51cd485e85d8422b5cdca0b02a264d72e5af59" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.178454 4752 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" event={"ID":"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98","Type":"ContainerDied","Data":"7faf98919cf1ce94830ec9853f51cd485e85d8422b5cdca0b02a264d72e5af59"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.184990 4752 generic.go:334] "Generic (PLEG): container finished" podID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerID="0bec74f3ce8596e2596893804ed9c55064e49bb92c2e7bfa89b7508ac92e80ff" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.185054 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4jbd" event={"ID":"0386a68f-2339-4ef6-8d96-e518b0682b4a","Type":"ContainerDied","Data":"0bec74f3ce8596e2596893804ed9c55064e49bb92c2e7bfa89b7508ac92e80ff"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.189642 4752 generic.go:334] "Generic (PLEG): container finished" podID="71357b8c-126d-4119-943e-653febd0612d" containerID="3b967c1968d70ece4a0d95ea666394d9a9ae3d92b33dba9effab6ec240e14391" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.189695 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x72kv" event={"ID":"71357b8c-126d-4119-943e-653febd0612d","Type":"ContainerDied","Data":"3b967c1968d70ece4a0d95ea666394d9a9ae3d92b33dba9effab6ec240e14391"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.192381 4752 generic.go:334] "Generic (PLEG): container finished" podID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerID="996cdda472a71efb9ecedd9d5bc44243492189a1d681caab808b37d573a3f902" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.192422 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerDied","Data":"996cdda472a71efb9ecedd9d5bc44243492189a1d681caab808b37d573a3f902"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.195026 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerID="546483283ddbe83b198683280058b60265726d2659ff4d9dacb3cb1aa0336bf6" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.195065 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65n2" event={"ID":"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5","Type":"ContainerDied","Data":"546483283ddbe83b198683280058b60265726d2659ff4d9dacb3cb1aa0336bf6"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.198158 4752 generic.go:334] "Generic (PLEG): container finished" podID="d7873b3a-ffed-4c96-818c-90117b142098" containerID="9dcea2c6acb7b4c2390a7defdea1ef3c4c04be179397d548862624a19da216e3" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.198220 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4qst" event={"ID":"d7873b3a-ffed-4c96-818c-90117b142098","Type":"ContainerDied","Data":"9dcea2c6acb7b4c2390a7defdea1ef3c4c04be179397d548862624a19da216e3"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.201036 4752 generic.go:334] "Generic (PLEG): container finished" podID="41faa779-87e9-41e1-a547-feba13612d57" containerID="3759f70ce942e032865509c98b807fd81c3d23d29562d220cdfb5ad48fb84579" exitCode=0 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.201065 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dblhf" 
event={"ID":"41faa779-87e9-41e1-a547-feba13612d57","Type":"ContainerDied","Data":"3759f70ce942e032865509c98b807fd81c3d23d29562d220cdfb5ad48fb84579"} Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.225558 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/565ffec2-a221-48d7-b657-a59c7dff1de1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.225894 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q7gr\" (UniqueName: \"kubernetes.io/projected/565ffec2-a221-48d7-b657-a59c7dff1de1-kube-api-access-8q7gr\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.225974 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/565ffec2-a221-48d7-b657-a59c7dff1de1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.227397 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/565ffec2-a221-48d7-b657-a59c7dff1de1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.234375 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/565ffec2-a221-48d7-b657-a59c7dff1de1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.239067 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.247624 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q7gr\" (UniqueName: \"kubernetes.io/projected/565ffec2-a221-48d7-b657-a59c7dff1de1-kube-api-access-8q7gr\") pod \"marketplace-operator-79b997595-dx6f6\" (UID: \"565ffec2-a221-48d7-b657-a59c7dff1de1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.463183 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.468540 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x72kv" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.478515 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4qst" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.482373 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x65n2" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.494576 4752 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.495517 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864" gracePeriod=15 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.495794 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c" gracePeriod=15 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.495832 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea" gracePeriod=15 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.495893 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49" gracePeriod=15 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.496123 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c" gracePeriod=15 Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501001 4752 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501514 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501562 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501588 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501602 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501621 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="extract-content" Jan 
22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501633 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="extract-content" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501647 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501658 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501674 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501688 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501712 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501725 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501741 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501753 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501768 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="extract-content" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501780 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="extract-content" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501798 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="extract-utilities" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501809 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="extract-utilities" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501829 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="extract-utilities" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501841 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="extract-utilities" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501880 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501892 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501905 4752 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501915 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501927 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="extract-utilities" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501939 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="extract-utilities" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501954 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501965 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.501984 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="extract-content" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.501995 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="extract-content" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.502007 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502019 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502183 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="71357b8c-126d-4119-943e-653febd0612d" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502205 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502216 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502227 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502241 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502253 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502268 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7873b3a-ffed-4c96-818c-90117b142098" containerName="registry-server" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502278 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 10:28:27 crc 
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.502288 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.506488 4752 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.507989 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.510967 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dblhf"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.514722 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.519284 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4jbd"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.573956 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gj"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.579049 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpphf"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632584 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-trusted-ca\") pod \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632632 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-utilities\") pod \"d7873b3a-ffed-4c96-818c-90117b142098\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632678 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-catalog-content\") pod \"71357b8c-126d-4119-943e-653febd0612d\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632709 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8mp8\" (UniqueName: \"kubernetes.io/projected/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-kube-api-access-g8mp8\") pod \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632739 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-catalog-content\") pod \"41faa779-87e9-41e1-a547-feba13612d57\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632766 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-utilities\") pod \"41faa779-87e9-41e1-a547-feba13612d57\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632790 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-catalog-content\") pod \"0386a68f-2339-4ef6-8d96-e518b0682b4a\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632814 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xks7\" (UniqueName: \"kubernetes.io/projected/71357b8c-126d-4119-943e-653febd0612d-kube-api-access-7xks7\") pod \"71357b8c-126d-4119-943e-653febd0612d\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632866 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-utilities\") pod \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632892 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-utilities\") pod \"0386a68f-2339-4ef6-8d96-e518b0682b4a\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632924 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-catalog-content\") pod \"d7873b3a-ffed-4c96-818c-90117b142098\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632954 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-catalog-content\") pod \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\" (UID: \"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.632979 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-operator-metrics\") pod \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633003 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25npf\" (UniqueName: \"kubernetes.io/projected/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-kube-api-access-25npf\") pod \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\" (UID: \"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633037 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-utilities\") pod \"71357b8c-126d-4119-943e-653febd0612d\" (UID: \"71357b8c-126d-4119-943e-653febd0612d\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633088 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2w2s\" (UniqueName: \"kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s\") pod \"41faa779-87e9-41e1-a547-feba13612d57\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") "
\"kube-api-access-d2w2s\" (UniqueName: \"kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s\") pod \"41faa779-87e9-41e1-a547-feba13612d57\" (UID: \"41faa779-87e9-41e1-a547-feba13612d57\") " Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633112 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct8vw\" (UniqueName: \"kubernetes.io/projected/d7873b3a-ffed-4c96-818c-90117b142098-kube-api-access-ct8vw\") pod \"d7873b3a-ffed-4c96-818c-90117b142098\" (UID: \"d7873b3a-ffed-4c96-818c-90117b142098\") " Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633143 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh52l\" (UniqueName: \"kubernetes.io/projected/0386a68f-2339-4ef6-8d96-e518b0682b4a-kube-api-access-mh52l\") pod \"0386a68f-2339-4ef6-8d96-e518b0682b4a\" (UID: \"0386a68f-2339-4ef6-8d96-e518b0682b4a\") " Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633326 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633353 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633377 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633403 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633461 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633489 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633521 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.633559 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.634762 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" (UID: "4bba5d5d-5eca-4d85-a64d-e127c8fe0e98"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.634911 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-utilities" (OuterVolumeSpecName: "utilities") pod "71357b8c-126d-4119-943e-653febd0612d" (UID: "71357b8c-126d-4119-943e-653febd0612d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.635436 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-utilities" (OuterVolumeSpecName: "utilities") pod "d7873b3a-ffed-4c96-818c-90117b142098" (UID: "d7873b3a-ffed-4c96-818c-90117b142098"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.636081 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-utilities" (OuterVolumeSpecName: "utilities") pod "eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" (UID: "eb0789a0-f347-4a6f-ba09-cd7cb558d5b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.636098 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-utilities" (OuterVolumeSpecName: "utilities") pod "41faa779-87e9-41e1-a547-feba13612d57" (UID: "41faa779-87e9-41e1-a547-feba13612d57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.637105 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-utilities" (OuterVolumeSpecName: "utilities") pod "0386a68f-2339-4ef6-8d96-e518b0682b4a" (UID: "0386a68f-2339-4ef6-8d96-e518b0682b4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.639403 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-kube-api-access-g8mp8" (OuterVolumeSpecName: "kube-api-access-g8mp8") pod "eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" (UID: "eb0789a0-f347-4a6f-ba09-cd7cb558d5b5"). InnerVolumeSpecName "kube-api-access-g8mp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.639884 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" (UID: "4bba5d5d-5eca-4d85-a64d-e127c8fe0e98"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.643252 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7873b3a-ffed-4c96-818c-90117b142098-kube-api-access-ct8vw" (OuterVolumeSpecName: "kube-api-access-ct8vw") pod "d7873b3a-ffed-4c96-818c-90117b142098" (UID: "d7873b3a-ffed-4c96-818c-90117b142098"). InnerVolumeSpecName "kube-api-access-ct8vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.643519 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s" (OuterVolumeSpecName: "kube-api-access-d2w2s") pod "41faa779-87e9-41e1-a547-feba13612d57" (UID: "41faa779-87e9-41e1-a547-feba13612d57"). InnerVolumeSpecName "kube-api-access-d2w2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.643627 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-kube-api-access-25npf" (OuterVolumeSpecName: "kube-api-access-25npf") pod "4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" (UID: "4bba5d5d-5eca-4d85-a64d-e127c8fe0e98"). InnerVolumeSpecName "kube-api-access-25npf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.643978 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0386a68f-2339-4ef6-8d96-e518b0682b4a-kube-api-access-mh52l" (OuterVolumeSpecName: "kube-api-access-mh52l") pod "0386a68f-2339-4ef6-8d96-e518b0682b4a" (UID: "0386a68f-2339-4ef6-8d96-e518b0682b4a"). InnerVolumeSpecName "kube-api-access-mh52l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.660041 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71357b8c-126d-4119-943e-653febd0612d-kube-api-access-7xks7" (OuterVolumeSpecName: "kube-api-access-7xks7") pod "71357b8c-126d-4119-943e-653febd0612d" (UID: "71357b8c-126d-4119-943e-653febd0612d"). InnerVolumeSpecName "kube-api-access-7xks7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.668273 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41faa779-87e9-41e1-a547-feba13612d57" (UID: "41faa779-87e9-41e1-a547-feba13612d57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.673782 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0386a68f-2339-4ef6-8d96-e518b0682b4a" (UID: "0386a68f-2339-4ef6-8d96-e518b0682b4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.704988 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71357b8c-126d-4119-943e-653febd0612d" (UID: "71357b8c-126d-4119-943e-653febd0612d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.724691 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.724784 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:28:27 crc kubenswrapper[4752]: E0122 10:28:27.727681 4752 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-v6hm8.188d06c660d4581b\": dial tcp 38.129.56.67:6443: connect: connection refused" event=< Jan 22 10:28:27 crc kubenswrapper[4752]: &Event{ObjectMeta:{machine-config-daemon-v6hm8.188d06c660d4581b openshift-machine-config-operator 29214 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-v6hm8,UID:eb8df70c-9474-4827-8831-f39fc6883d79,APIVersion:v1,ResourceVersion:26580,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 22 10:28:27 crc kubenswrapper[4752]: body: Jan 22 10:28:27 crc kubenswrapper[4752]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:27:57 +0000 UTC,LastTimestamp:2026-01-22 10:28:27.72475038 +0000 UTC m=+186.954693288,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 10:28:27 crc kubenswrapper[4752]: > Jan 22 10:28:27 crc kubenswrapper[4752]: 
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.734843 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-catalog-content\") pod \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.734890 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-utilities\") pod \"0d035039-51b4-41aa-9e33-db0a1ca24332\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.734915 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfz4s\" (UniqueName: \"kubernetes.io/projected/3c7b73c6-cc59-400a-858e-85af0b88a5b9-kube-api-access-vfz4s\") pod \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\" (UID: \"3c7b73c6-cc59-400a-858e-85af0b88a5b9\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.734959 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kttwz\" (UniqueName: \"kubernetes.io/projected/0d035039-51b4-41aa-9e33-db0a1ca24332-kube-api-access-kttwz\") pod \"0d035039-51b4-41aa-9e33-db0a1ca24332\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.734976 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-catalog-content\") pod \"0d035039-51b4-41aa-9e33-db0a1ca24332\" (UID: \"0d035039-51b4-41aa-9e33-db0a1ca24332\") "
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735145 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735198 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735214 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735233 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735254 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735279 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735299 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735301 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735318 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735353 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735456 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735479 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735495 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735512 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735527 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735543 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735560 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735579 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8mp8\" (UniqueName: \"kubernetes.io/projected/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-kube-api-access-g8mp8\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735591 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735602 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41faa779-87e9-41e1-a547-feba13612d57-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735611 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735621 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xks7\" (UniqueName: \"kubernetes.io/projected/71357b8c-126d-4119-943e-653febd0612d-kube-api-access-7xks7\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735630 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0386a68f-2339-4ef6-8d96-e518b0682b4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735638 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735625 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-utilities" (OuterVolumeSpecName: 
"utilities") pod "0d035039-51b4-41aa-9e33-db0a1ca24332" (UID: "0d035039-51b4-41aa-9e33-db0a1ca24332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735649 4752 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735713 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25npf\" (UniqueName: \"kubernetes.io/projected/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-kube-api-access-25npf\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735728 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71357b8c-126d-4119-943e-653febd0612d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735743 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2w2s\" (UniqueName: \"kubernetes.io/projected/41faa779-87e9-41e1-a547-feba13612d57-kube-api-access-d2w2s\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735755 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct8vw\" (UniqueName: \"kubernetes.io/projected/d7873b3a-ffed-4c96-818c-90117b142098-kube-api-access-ct8vw\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735769 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh52l\" (UniqueName: \"kubernetes.io/projected/0386a68f-2339-4ef6-8d96-e518b0682b4a-kube-api-access-mh52l\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735782 4752 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.735795 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.737219 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-utilities" (OuterVolumeSpecName: "utilities") pod "3c7b73c6-cc59-400a-858e-85af0b88a5b9" (UID: "3c7b73c6-cc59-400a-858e-85af0b88a5b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.756052 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d035039-51b4-41aa-9e33-db0a1ca24332-kube-api-access-kttwz" (OuterVolumeSpecName: "kube-api-access-kttwz") pod "0d035039-51b4-41aa-9e33-db0a1ca24332" (UID: "0d035039-51b4-41aa-9e33-db0a1ca24332"). InnerVolumeSpecName "kube-api-access-kttwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.764721 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7b73c6-cc59-400a-858e-85af0b88a5b9-kube-api-access-vfz4s" (OuterVolumeSpecName: "kube-api-access-vfz4s") pod "3c7b73c6-cc59-400a-858e-85af0b88a5b9" (UID: "3c7b73c6-cc59-400a-858e-85af0b88a5b9"). InnerVolumeSpecName "kube-api-access-vfz4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.837424 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.837461 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.837473 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfz4s\" (UniqueName: \"kubernetes.io/projected/3c7b73c6-cc59-400a-858e-85af0b88a5b9-kube-api-access-vfz4s\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:27 crc kubenswrapper[4752]: I0122 10:28:27.837486 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kttwz\" (UniqueName: \"kubernetes.io/projected/0d035039-51b4-41aa-9e33-db0a1ca24332-kube-api-access-kttwz\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.182076 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" (UID: "eb0789a0-f347-4a6f-ba09-cd7cb558d5b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.197121 4752 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 22 10:28:28 crc kubenswrapper[4752]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949" Netns:"/var/run/netns/1ef3c49f-f927-4fb1-8530-abccd824dfb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:28:28 crc kubenswrapper[4752]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:28:28 crc kubenswrapper[4752]: > Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.197217 4752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 22 10:28:28 crc kubenswrapper[4752]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949" Netns:"/var/run/netns/1ef3c49f-f927-4fb1-8530-abccd824dfb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:28:28 crc kubenswrapper[4752]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:28:28 crc kubenswrapper[4752]: > pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.197235 4752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 22 10:28:28 crc kubenswrapper[4752]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949" Netns:"/var/run/netns/1ef3c49f-f927-4fb1-8530-abccd824dfb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:28:28 crc kubenswrapper[4752]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:28:28 crc kubenswrapper[4752]: > pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.197306 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-79b997595-dx6f6_openshift-marketplace(565ffec2-a221-48d7-b657-a59c7dff1de1)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-dx6f6_openshift-marketplace(565ffec2-a221-48d7-b657-a59c7dff1de1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949\\\" Netns:\\\"/var/run/netns/1ef3c49f-f927-4fb1-8530-abccd824dfb4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=431be875a3c3fd5e7b0c6f6b7a1aab77d26ca26a81492cf31ada7a9b5d046949;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s\\\": dial tcp 38.129.56.67:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" podUID="565ffec2-a221-48d7-b657-a59c7dff1de1" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.213066 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4jbd" event={"ID":"0386a68f-2339-4ef6-8d96-e518b0682b4a","Type":"ContainerDied","Data":"9d8df439680c8c9761b501b3a9768377966afc10b11cb698c7046a604de2d22c"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.213135 4752 scope.go:117] "RemoveContainer" containerID="0bec74f3ce8596e2596893804ed9c55064e49bb92c2e7bfa89b7508ac92e80ff" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.213140 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4jbd" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.217832 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpphf" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.217836 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpphf" event={"ID":"3c7b73c6-cc59-400a-858e-85af0b88a5b9","Type":"ContainerDied","Data":"4aba645354fed7326071594b01ef8f5340f92c821f3add9b71b3598beed47ebf"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.224308 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4qst" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.224318 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4qst" event={"ID":"d7873b3a-ffed-4c96-818c-90117b142098","Type":"ContainerDied","Data":"0d7212c67dc318051af19061f5ee49d7657da362294f7144ef5ba650b2530fce"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.227025 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dblhf" event={"ID":"41faa779-87e9-41e1-a547-feba13612d57","Type":"ContainerDied","Data":"4ac89ac67994d4946eb9d799b6e458133c9b6c710a8b8d67c30e9b06dda22e58"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.227052 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dblhf" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.230809 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gj" event={"ID":"0d035039-51b4-41aa-9e33-db0a1ca24332","Type":"ContainerDied","Data":"7fc145b0f3a6b9e1ea9ca0e6f4b916ea75905dc7316279fb15663dbfb76b5162"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.230907 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gj" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.232156 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x72kv" event={"ID":"71357b8c-126d-4119-943e-653febd0612d","Type":"ContainerDied","Data":"51ceae9984997bc02fe267c283d7e785a03733b91c1976afd2af77472d8af0ea"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.232202 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x72kv" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.234762 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7873b3a-ffed-4c96-818c-90117b142098" (UID: "d7873b3a-ffed-4c96-818c-90117b142098"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.234927 4752 scope.go:117] "RemoveContainer" containerID="2b31dc47d0f37e6e7aa0e57f4bc3507bb929ad37d69d769c0aa8b52117cd6949" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.241612 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65n2" event={"ID":"eb0789a0-f347-4a6f-ba09-cd7cb558d5b5","Type":"ContainerDied","Data":"730773af0c3413a4408760b47b559e86b1d28493aed5d97332093fe116dbfb8b"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.241717 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x65n2" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.243831 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7873b3a-ffed-4c96-818c-90117b142098-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.244170 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.246677 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.248723 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.249350 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c" exitCode=0 Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.249401 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49" exitCode=0 Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.249415 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea" exitCode=0 Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.249427 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c" exitCode=2 Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.250779 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.251288 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.251368 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.251355 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" event={"ID":"4bba5d5d-5eca-4d85-a64d-e127c8fe0e98","Type":"ContainerDied","Data":"f406d88a768fb1ced4958a4463387aa1484bdc5bad568bef53f733ec8502eed2"} Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.263116 4752 scope.go:117] "RemoveContainer" containerID="132a6a2a723ebce88cbfda905db99b3d5e1070bec2d43a90dffd50c52e4c5eda" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.304091 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c7b73c6-cc59-400a-858e-85af0b88a5b9" (UID: "3c7b73c6-cc59-400a-858e-85af0b88a5b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.345352 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b73c6-cc59-400a-858e-85af0b88a5b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.357636 4752 scope.go:117] "RemoveContainer" containerID="996cdda472a71efb9ecedd9d5bc44243492189a1d681caab808b37d573a3f902" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.385169 4752 scope.go:117] "RemoveContainer" containerID="180e7716eb53c6ef72eb13c8061b8c9066fc56434fb92e8e07a32666258f1e45" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.391622 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d035039-51b4-41aa-9e33-db0a1ca24332" (UID: "0d035039-51b4-41aa-9e33-db0a1ca24332"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.400137 4752 scope.go:117] "RemoveContainer" containerID="459027283f874c289123e268d36137c023d75ae92459ceefc46b75caf690324f" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.422313 4752 scope.go:117] "RemoveContainer" containerID="9dcea2c6acb7b4c2390a7defdea1ef3c4c04be179397d548862624a19da216e3" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.435741 4752 scope.go:117] "RemoveContainer" containerID="a0bc945ba63b06e35aa199c887938a8032e87e0d12949f86cd65df9ab560e852" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.446292 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d035039-51b4-41aa-9e33-db0a1ca24332-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.450092 4752 scope.go:117] "RemoveContainer" containerID="df2c4e057d281943bd64261690ad5ca07f35cf5631e8de0003bb27c813ef6647" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.474504 4752 scope.go:117] "RemoveContainer" containerID="3759f70ce942e032865509c98b807fd81c3d23d29562d220cdfb5ad48fb84579" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.499183 4752 scope.go:117] "RemoveContainer" containerID="2a4e26a274b6134b193c177d159564b749dab10fc7e984d2747ef2bc60b23d6d" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.514555 4752 scope.go:117] "RemoveContainer" containerID="d973aeaa0d015db93080dcd87e209bf37effaf5cfa39a2fd588f7a40586e5403" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.538135 4752 scope.go:117] "RemoveContainer" containerID="ecf84d21641b383b9ecf1c8911e63d55aa2c977315ca6d29ac320d75d33bae49" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.565991 4752 scope.go:117] "RemoveContainer" containerID="9c357e0e210f3b1838c82e71516ade6ab4ade350c681e700976d64244d5b39de" Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.569959 4752 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 22 10:28:28 crc kubenswrapper[4752]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154" Netns:"/var/run/netns/79983f28-2ebe-4e94-8fcd-185bdb3ddacd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:28:28 crc kubenswrapper[4752]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:28:28 crc kubenswrapper[4752]: > Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.570029 4752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 22 10:28:28 crc kubenswrapper[4752]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154" Netns:"/var/run/netns/79983f28-2ebe-4e94-8fcd-185bdb3ddacd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:28:28 crc kubenswrapper[4752]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:28:28 crc kubenswrapper[4752]: > pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.570051 4752 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 22 10:28:28 crc kubenswrapper[4752]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154" Netns:"/var/run/netns/79983f28-2ebe-4e94-8fcd-185bdb3ddacd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s": dial tcp 38.129.56.67:6443: connect: connection refused Jan 22 10:28:28 crc kubenswrapper[4752]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 22 10:28:28 crc kubenswrapper[4752]: > pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:28 crc kubenswrapper[4752]: E0122 10:28:28.570113 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-79b997595-dx6f6_openshift-marketplace(565ffec2-a221-48d7-b657-a59c7dff1de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-dx6f6_openshift-marketplace(565ffec2-a221-48d7-b657-a59c7dff1de1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-dx6f6_openshift-marketplace_565ffec2-a221-48d7-b657-a59c7dff1de1_0(5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154): error adding pod openshift-marketplace_marketplace-operator-79b997595-dx6f6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154\\\" Netns:\\\"/var/run/netns/79983f28-2ebe-4e94-8fcd-185bdb3ddacd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-dx6f6;K8S_POD_INFRA_CONTAINER_ID=5de60b65fe8b344255e41e276bec8c99cd483a8538e8399af53b29e55a6db154;K8S_POD_UID=565ffec2-a221-48d7-b657-a59c7dff1de1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-dx6f6] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-dx6f6/565ffec2-a221-48d7-b657-a59c7dff1de1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-79b997595-dx6f6 in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-dx6f6?timeout=1m0s\\\": dial tcp 38.129.56.67:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" podUID="565ffec2-a221-48d7-b657-a59c7dff1de1" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.578449 4752 scope.go:117] "RemoveContainer" containerID="b13532d54c31a5b054a1343ac712ca0cb6240d5f613442015685e2414c5640a5" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.589286 4752 scope.go:117] "RemoveContainer" containerID="3b967c1968d70ece4a0d95ea666394d9a9ae3d92b33dba9effab6ec240e14391" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.599126 4752 scope.go:117] "RemoveContainer" containerID="02bc51a2f7aab3ebc65d682a36d0a264b44088c971b43715d548f5d39db4ee12" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.619077 4752 scope.go:117] "RemoveContainer" containerID="1ee51ca0faf1552304cb6fd988a449d37852204f887a0a1af2cba0398e1ee87d" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.631709 4752 scope.go:117] "RemoveContainer" containerID="546483283ddbe83b198683280058b60265726d2659ff4d9dacb3cb1aa0336bf6" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.642917 4752 scope.go:117] "RemoveContainer" containerID="8b7949b92af25723b2556d2ddd46a319eeaaea1544ffe90a83189161d21f742b" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.654701 4752 scope.go:117] "RemoveContainer" containerID="2e4df822c558e9e480ae076f0c369ec310014cb14c797433a472d24866b348b2" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.671448 4752 scope.go:117] "RemoveContainer" containerID="7a68b324fa2d964ea80ddbda36b47d54afb7cb9d3fa2ec1b5d72982528e70c6a" Jan 22 10:28:28 crc kubenswrapper[4752]: I0122 10:28:28.700245 4752 scope.go:117] "RemoveContainer" containerID="7faf98919cf1ce94830ec9853f51cd485e85d8422b5cdca0b02a264d72e5af59" Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.082365 4752 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.082425 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.271815 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.278425 4752 generic.go:334] "Generic (PLEG): container finished" 
podID="512fd0f5-4e67-429d-abe3-7eea327491ee" containerID="b3fa775059178e3bba58e652005cc6128692a4b3b536cfb69e936e119a8bf562" exitCode=0 Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.278474 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"512fd0f5-4e67-429d-abe3-7eea327491ee","Type":"ContainerDied","Data":"b3fa775059178e3bba58e652005cc6128692a4b3b536cfb69e936e119a8bf562"} Jan 22 10:28:29 crc kubenswrapper[4752]: E0122 10:28:29.481014 4752 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-v6hm8.188d06c660d4581b\": dial tcp 38.129.56.67:6443: connect: connection refused" event=< Jan 22 10:28:29 crc kubenswrapper[4752]: &Event{ObjectMeta:{machine-config-daemon-v6hm8.188d06c660d4581b openshift-machine-config-operator 29214 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-v6hm8,UID:eb8df70c-9474-4827-8831-f39fc6883d79,APIVersion:v1,ResourceVersion:26580,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 22 10:28:29 crc kubenswrapper[4752]: body: Jan 22 10:28:29 crc kubenswrapper[4752]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:27:57 +0000 UTC,LastTimestamp:2026-01-22 10:28:27.72475038 +0000 UTC m=+186.954693288,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 10:28:29 crc kubenswrapper[4752]: > Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.967402 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:28:29 crc kubenswrapper[4752]: I0122 10:28:29.969077 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.065798 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.065987 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.066414 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.066287 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.066573 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.067044 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.067284 4752 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.067566 4752 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.168900 4752 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.290571 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.291440 4752 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864" exitCode=0 Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.291546 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.291569 4752 scope.go:117] "RemoveContainer" containerID="6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.314656 4752 scope.go:117] "RemoveContainer" containerID="924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.340116 4752 scope.go:117] "RemoveContainer" containerID="eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.355684 4752 scope.go:117] "RemoveContainer" containerID="a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.369906 4752 scope.go:117] "RemoveContainer" containerID="7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.385652 4752 scope.go:117] "RemoveContainer" containerID="8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.415430 4752 scope.go:117] "RemoveContainer" containerID="6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c" Jan 22 10:28:30 crc kubenswrapper[4752]: E0122 10:28:30.416712 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\": container with ID starting with 6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c not found: ID does not exist" containerID="6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.416774 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c"} err="failed to get container status \"6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\": rpc error: code = NotFound desc = could not find container \"6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c\": container with ID starting with 6287f360fa6d66a99e263e1227ae34a103bd233217a42eaaee9ff1ce2871657c not found: ID does not exist" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.416841 4752 scope.go:117] "RemoveContainer" containerID="924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49" Jan 22 10:28:30 crc kubenswrapper[4752]: E0122 10:28:30.419417 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\": container with ID starting with 924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49 not found: ID does not exist" containerID="924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.419456 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49"} err="failed to get container status \"924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\": rpc error: code = NotFound desc = could not find container \"924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49\": container with ID starting with 
924b7ee5e6fb9cbb3b51cd8e6d5e754bcc6fd83b9018abf07899f6f704762c49 not found: ID does not exist" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.419485 4752 scope.go:117] "RemoveContainer" containerID="eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea" Jan 22 10:28:30 crc kubenswrapper[4752]: E0122 10:28:30.420049 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\": container with ID starting with eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea not found: ID does not exist" containerID="eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.420071 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea"} err="failed to get container status \"eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\": rpc error: code = NotFound desc = could not find container \"eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea\": container with ID starting with eb2d07baf4190d2940a36465bd40664daec9f4399a8fa745229703ca11d396ea not found: ID does not exist" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.420089 4752 scope.go:117] "RemoveContainer" containerID="a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c" Jan 22 10:28:30 crc kubenswrapper[4752]: E0122 10:28:30.420457 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\": container with ID starting with a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c not found: ID does not exist" containerID="a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.420490 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c"} err="failed to get container status \"a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\": rpc error: code = NotFound desc = could not find container \"a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c\": container with ID starting with a65cfc6240e26e6d3a49fdb9df38b8e1f5ddf7f1fb8c897efb3ab0f928706b9c not found: ID does not exist" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.420508 4752 scope.go:117] "RemoveContainer" containerID="7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864" Jan 22 10:28:30 crc kubenswrapper[4752]: E0122 10:28:30.420810 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\": container with ID starting with 7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864 not found: ID does not exist" containerID="7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.420834 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864"} err="failed to get container status \"7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\": rpc 
error: code = NotFound desc = could not find container \"7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864\": container with ID starting with 7f95a950654795df73df7aeee65f426711fc75e022c02f33c6b818c3691dc864 not found: ID does not exist" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.420869 4752 scope.go:117] "RemoveContainer" containerID="8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c" Jan 22 10:28:30 crc kubenswrapper[4752]: E0122 10:28:30.421201 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\": container with ID starting with 8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c not found: ID does not exist" containerID="8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.421221 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c"} err="failed to get container status \"8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\": rpc error: code = NotFound desc = could not find container \"8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c\": container with ID starting with 8b0360af35dd9722e563d2de517366ed24fac536ea0f0484b8a524637e0a7d0c not found: ID does not exist" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.566738 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.674147 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/512fd0f5-4e67-429d-abe3-7eea327491ee-kube-api-access\") pod \"512fd0f5-4e67-429d-abe3-7eea327491ee\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.674232 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-kubelet-dir\") pod \"512fd0f5-4e67-429d-abe3-7eea327491ee\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.674289 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-var-lock\") pod \"512fd0f5-4e67-429d-abe3-7eea327491ee\" (UID: \"512fd0f5-4e67-429d-abe3-7eea327491ee\") " Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.674388 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "512fd0f5-4e67-429d-abe3-7eea327491ee" (UID: "512fd0f5-4e67-429d-abe3-7eea327491ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.674504 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-var-lock" (OuterVolumeSpecName: "var-lock") pod "512fd0f5-4e67-429d-abe3-7eea327491ee" (UID: "512fd0f5-4e67-429d-abe3-7eea327491ee"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.674548 4752 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.680907 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512fd0f5-4e67-429d-abe3-7eea327491ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "512fd0f5-4e67-429d-abe3-7eea327491ee" (UID: "512fd0f5-4e67-429d-abe3-7eea327491ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.776128 4752 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/512fd0f5-4e67-429d-abe3-7eea327491ee-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:30 crc kubenswrapper[4752]: I0122 10:28:30.776184 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/512fd0f5-4e67-429d-abe3-7eea327491ee-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:31 crc kubenswrapper[4752]: I0122 10:28:31.104386 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 10:28:31 crc kubenswrapper[4752]: I0122 10:28:31.299583 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"512fd0f5-4e67-429d-abe3-7eea327491ee","Type":"ContainerDied","Data":"0ee9cd40774f4f49b647b4d5f89cdcc3c9c31a8421d572c0b82d673eb18e0778"} Jan 22 10:28:31 crc kubenswrapper[4752]: I0122 10:28:31.299624 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee9cd40774f4f49b647b4d5f89cdcc3c9c31a8421d572c0b82d673eb18e0778" Jan 22 10:28:31 crc kubenswrapper[4752]: I0122 10:28:31.299674 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.595466 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.595797 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.596048 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.596266 4752 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.596484 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.596697 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.597035 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.597237 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.597447 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.597673 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.597938 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.599396 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.599801 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.601081 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: E0122 10:28:32.601192 4752 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.67:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.601251 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.601742 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.604193 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.604438 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.604602 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.604777 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.604931 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.605074 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.605423 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.605751 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:32 crc kubenswrapper[4752]: I0122 10:28:32.606165 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.320740 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8c7283d40bbfdd4a5b7a929c5163d5c88310b664603551e7c798559040ae3e4e"} Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.321297 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aa4a2f57d09dc1f23db01d6df5aedf9f72bae51baf40878e11644e30a9877827"} Jan 22 10:28:33 crc kubenswrapper[4752]: E0122 10:28:33.321952 4752 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.67:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.322077 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.322231 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.322414 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.322779 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.323258 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.323419 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc 
kubenswrapper[4752]: I0122 10:28:33.323732 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.323984 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:33 crc kubenswrapper[4752]: I0122 10:28:33.324254 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.241603 4752 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.242212 4752 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.243084 4752 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.243495 4752 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.244011 4752 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:35 crc kubenswrapper[4752]: I0122 10:28:35.244057 4752 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.244501 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="200ms" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 10:28:35.446294 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="400ms" Jan 22 10:28:35 crc kubenswrapper[4752]: E0122 
10:28:35.847842 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="800ms" Jan 22 10:28:36 crc kubenswrapper[4752]: E0122 10:28:36.649198 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="1.6s" Jan 22 10:28:38 crc kubenswrapper[4752]: E0122 10:28:38.250920 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="3.2s" Jan 22 10:28:39 crc kubenswrapper[4752]: E0122 10:28:39.482231 4752 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-v6hm8.188d06c660d4581b\": dial tcp 38.129.56.67:6443: connect: connection refused" event=< Jan 22 10:28:39 crc kubenswrapper[4752]: &Event{ObjectMeta:{machine-config-daemon-v6hm8.188d06c660d4581b openshift-machine-config-operator 29214 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-v6hm8,UID:eb8df70c-9474-4827-8831-f39fc6883d79,APIVersion:v1,ResourceVersion:26580,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 22 10:28:39 crc kubenswrapper[4752]: body: Jan 22 10:28:39 crc kubenswrapper[4752]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 10:27:57 +0000 UTC,LastTimestamp:2026-01-22 10:28:27.72475038 +0000 UTC m=+186.954693288,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 10:28:39 crc kubenswrapper[4752]: > Jan 22 10:28:39 crc kubenswrapper[4752]: I0122 10:28:39.979097 4752 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 10:28:39 crc kubenswrapper[4752]: I0122 10:28:39.979251 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.219205 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.220181 4752 status_manager.go:851] "Failed to get status for pod" podUID="845573f7-b60d-40ce-8a84-45507baa3934" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-g8qkl\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.220659 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.221339 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.221874 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.222218 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.222677 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.223117 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.223433 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.223884 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.224138 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: E0122 10:28:40.227802 4752 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.67:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" volumeName="registry-storage" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.372288 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.372366 4752 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2" exitCode=1 Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.372407 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2"} Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.373069 4752 scope.go:117] "RemoveContainer" containerID="39b2897e5aacb6728bdf2909488375b2f757584e1e23ef3bd18f0e05ec1a66a2" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.373421 4752 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.373755 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.374458 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.374917 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.375433 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" 
pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.375929 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.376619 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.377022 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.377391 4752 status_manager.go:851] "Failed to get status for pod" podUID="845573f7-b60d-40ce-8a84-45507baa3934" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-g8qkl\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.377690 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.378158 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:40 crc kubenswrapper[4752]: I0122 10:28:40.569117 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.097161 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.100614 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.101048 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.101264 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.101641 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.102199 4752 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.102655 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.102933 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.103151 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.103443 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.103840 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.104295 4752 status_manager.go:851] "Failed to get status for pod" podUID="845573f7-b60d-40ce-8a84-45507baa3934" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-g8qkl\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.104781 4752 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.105040 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.105277 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.105570 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.105916 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.106387 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.106900 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" 
pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.107254 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.107599 4752 status_manager.go:851] "Failed to get status for pod" podUID="845573f7-b60d-40ce-8a84-45507baa3934" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-g8qkl\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.107935 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.108324 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.118338 4752 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.118381 4752 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:41 crc kubenswrapper[4752]: E0122 10:28:41.118913 4752 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.119607 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.159530 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.383537 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.383667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69bd1bb3006d6f064982f9fcc4c33bee93f6c19a3e1483baed042823a40283ef"} Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.384705 4752 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.384937 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"899754f17c3293311a19f1ecbf30f9ebc1ad712468f0038fdba50a967bfd0f52"} Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.385146 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.385534 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.386023 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.386558 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.386996 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: 
connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.387303 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.388267 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.388602 4752 status_manager.go:851] "Failed to get status for pod" podUID="845573f7-b60d-40ce-8a84-45507baa3934" pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-g8qkl\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.389057 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: I0122 10:28:41.389422 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:41 crc kubenswrapper[4752]: E0122 10:28:41.452247 4752 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.67:6443: connect: connection refused" interval="6.4s" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.393276 4752 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="10cbffec38da579e1618c5d47ed27a5752c266c2c0645e5b0292f34cf434efb0" exitCode=0 Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.393350 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"10cbffec38da579e1618c5d47ed27a5752c266c2c0645e5b0292f34cf434efb0"} Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.393991 4752 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.394033 4752 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.394332 4752 status_manager.go:851] "Failed to get status for pod" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" 
pod="openshift-marketplace/certified-operators-x65n2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x65n2\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: E0122 10:28:42.395020 4752 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.395075 4752 status_manager.go:851] "Failed to get status for pod" podUID="71357b8c-126d-4119-943e-653febd0612d" pod="openshift-marketplace/certified-operators-x72kv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x72kv\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.395728 4752 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.396407 4752 status_manager.go:851] "Failed to get status for pod" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" pod="openshift-marketplace/community-operators-4r8gj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4r8gj\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.396745 4752 status_manager.go:851] "Failed to get status for pod" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" pod="openshift-marketplace/redhat-operators-lpphf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lpphf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.397272 4752 status_manager.go:851] "Failed to get status for pod" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" pod="openshift-marketplace/redhat-marketplace-j4jbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-j4jbd\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.397676 4752 status_manager.go:851] "Failed to get status for pod" podUID="41faa779-87e9-41e1-a547-feba13612d57" pod="openshift-marketplace/redhat-marketplace-dblhf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-dblhf\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.398062 4752 status_manager.go:851] "Failed to get status for pod" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.398474 4752 status_manager.go:851] "Failed to get status for pod" podUID="845573f7-b60d-40ce-8a84-45507baa3934" 
pod="openshift-image-registry/image-registry-66df7c8f76-g8qkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-g8qkl\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.399027 4752 status_manager.go:851] "Failed to get status for pod" podUID="d7873b3a-ffed-4c96-818c-90117b142098" pod="openshift-marketplace/community-operators-z4qst" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-z4qst\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:42 crc kubenswrapper[4752]: I0122 10:28:42.399456 4752 status_manager.go:851] "Failed to get status for pod" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" pod="openshift-marketplace/marketplace-operator-79b997595-gx9kk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-gx9kk\": dial tcp 38.129.56.67:6443: connect: connection refused" Jan 22 10:28:43 crc kubenswrapper[4752]: I0122 10:28:43.097574 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:43 crc kubenswrapper[4752]: I0122 10:28:43.098511 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:43 crc kubenswrapper[4752]: I0122 10:28:43.400693 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2bb62b2499cb32eb35b45e6c7a42e80305bff0729ce2168a5fed1589768844f2"} Jan 22 10:28:43 crc kubenswrapper[4752]: I0122 10:28:43.401049 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad96b0dbf130a86d62a90728c87eb72a9f8aac53f1b9d7a4fb4f80013a3a9170"} Jan 22 10:28:43 crc kubenswrapper[4752]: I0122 10:28:43.401067 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"988a910a4172afd09e08f4c14dbe33494fdbca1bb1f79b0502b5f57f645d07dd"} Jan 22 10:28:44 crc kubenswrapper[4752]: I0122 10:28:44.423052 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e3bc0f0a135d84d721ac384eaf5bdbb45b6c92e58a073489d1bec524358f7a50"} Jan 22 10:28:44 crc kubenswrapper[4752]: I0122 10:28:44.424085 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"542c87de9f2005e7e27717f717b1cffb7731b9bf9dc4ce543f015d3e5eb783bd"} Jan 22 10:28:44 crc kubenswrapper[4752]: I0122 10:28:44.424111 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:44 crc kubenswrapper[4752]: I0122 10:28:44.423501 4752 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:44 crc kubenswrapper[4752]: I0122 10:28:44.424138 4752 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:46 crc kubenswrapper[4752]: I0122 10:28:46.120449 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:46 crc kubenswrapper[4752]: I0122 10:28:46.120519 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:46 crc kubenswrapper[4752]: I0122 10:28:46.129125 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:49 crc kubenswrapper[4752]: I0122 10:28:49.497178 4752 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:49 crc kubenswrapper[4752]: W0122 10:28:49.553196 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod565ffec2_a221_48d7_b657_a59c7dff1de1.slice/crio-5d51d5a121937ba9792e7afede2b6f158213b5f43e04f98126826f81ca77bb6a WatchSource:0}: Error finding container 5d51d5a121937ba9792e7afede2b6f158213b5f43e04f98126826f81ca77bb6a: Status 404 returned error can't find the container with id 5d51d5a121937ba9792e7afede2b6f158213b5f43e04f98126826f81ca77bb6a Jan 22 10:28:49 crc kubenswrapper[4752]: I0122 10:28:49.978863 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.459475 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dx6f6_565ffec2-a221-48d7-b657-a59c7dff1de1/marketplace-operator/0.log" Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.459554 4752 generic.go:334] "Generic (PLEG): container finished" podID="565ffec2-a221-48d7-b657-a59c7dff1de1" containerID="58fd2fa90abfae31d5468b0a1ece1c7aa53253ce9fdcadae1a20f11a49b09232" exitCode=1 Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.459667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" event={"ID":"565ffec2-a221-48d7-b657-a59c7dff1de1","Type":"ContainerDied","Data":"58fd2fa90abfae31d5468b0a1ece1c7aa53253ce9fdcadae1a20f11a49b09232"} Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.459725 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" event={"ID":"565ffec2-a221-48d7-b657-a59c7dff1de1","Type":"ContainerStarted","Data":"5d51d5a121937ba9792e7afede2b6f158213b5f43e04f98126826f81ca77bb6a"} Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.460191 4752 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.460225 4752 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.460389 4752 scope.go:117] "RemoveContainer" containerID="58fd2fa90abfae31d5468b0a1ece1c7aa53253ce9fdcadae1a20f11a49b09232" Jan 22 10:28:50 crc kubenswrapper[4752]: I0122 10:28:50.475804 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:28:51 crc 
kubenswrapper[4752]: I0122 10:28:51.133019 4752 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="355db772-2baf-4747-beb8-ce8c375acf0e" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.159594 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.159964 4752 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.160028 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.471228 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dx6f6_565ffec2-a221-48d7-b657-a59c7dff1de1/marketplace-operator/1.log" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.472099 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dx6f6_565ffec2-a221-48d7-b657-a59c7dff1de1/marketplace-operator/0.log" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.472220 4752 generic.go:334] "Generic (PLEG): container finished" podID="565ffec2-a221-48d7-b657-a59c7dff1de1" containerID="88e9391d996453b1483a4bc5e8a0196655199d72654d90b3354820fca85fedc0" exitCode=1 Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.472371 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" event={"ID":"565ffec2-a221-48d7-b657-a59c7dff1de1","Type":"ContainerDied","Data":"88e9391d996453b1483a4bc5e8a0196655199d72654d90b3354820fca85fedc0"} Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.472459 4752 scope.go:117] "RemoveContainer" containerID="58fd2fa90abfae31d5468b0a1ece1c7aa53253ce9fdcadae1a20f11a49b09232" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.472788 4752 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.472836 4752 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.473283 4752 scope.go:117] "RemoveContainer" containerID="88e9391d996453b1483a4bc5e8a0196655199d72654d90b3354820fca85fedc0" Jan 22 10:28:51 crc kubenswrapper[4752]: E0122 10:28:51.473681 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-dx6f6_openshift-marketplace(565ffec2-a221-48d7-b657-a59c7dff1de1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" podUID="565ffec2-a221-48d7-b657-a59c7dff1de1"
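The pod_workers.go:1301 error is not a new failure of its own: marketplace-operator has now exited twice in a row (the ContainerDied events for 58fd2fa9... and 88e9391d...), so the kubelet declines to restart it immediately and enters CrashLoopBackOff. The delay starts at 10s, as quoted in the message, and by default doubles on each subsequent failed restart up to a 5-minute cap, resetting only after the container has run cleanly for a while. A small sketch of that growth (the 10s start, doubling, and 5m cap are the documented kubelet defaults; the loop itself is illustrative, not the kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxDelay = 5 * time.Minute // documented kubelet cap
        delay := 10 * time.Second        // the "back-off 10s" in the log
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("failed restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

That matches the log: the retry at 10:28:57 below is still quoting "back-off 10s" because the container has only just begun crash-looping.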
podUID="565ffec2-a221-48d7-b657-a59c7dff1de1" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.486598 4752 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="355db772-2baf-4747-beb8-ce8c375acf0e" Jan 22 10:28:51 crc kubenswrapper[4752]: I0122 10:28:51.638274 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" podUID="c82bf83f-1d94-41a2-ad18-2264806dd9ff" containerName="oauth-openshift" containerID="cri-o://20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483" gracePeriod=15 Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.049791 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196297 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-provider-selection\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196455 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-serving-cert\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196529 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-login\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196602 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-service-ca\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196642 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-idp-0-file-data\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196671 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-router-certs\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196706 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-session\") pod 
\"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196741 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-policies\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196756 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-cliconfig\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196774 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-ocp-branding-template\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196802 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64md4\" (UniqueName: \"kubernetes.io/projected/c82bf83f-1d94-41a2-ad18-2264806dd9ff-kube-api-access-64md4\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196836 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-error\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196868 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-trusted-ca-bundle\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.196891 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-dir\") pod \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\" (UID: \"c82bf83f-1d94-41a2-ad18-2264806dd9ff\") " Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.197191 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.197506 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.197533 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.197563 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.198033 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.202413 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82bf83f-1d94-41a2-ad18-2264806dd9ff-kube-api-access-64md4" (OuterVolumeSpecName: "kube-api-access-64md4") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "kube-api-access-64md4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.202621 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.203060 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.203188 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.203642 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.203761 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.203999 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.204177 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.204300 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c82bf83f-1d94-41a2-ad18-2264806dd9ff" (UID: "c82bf83f-1d94-41a2-ad18-2264806dd9ff"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298233 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298275 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298284 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298293 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298303 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298313 4752 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298323 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298333 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298342 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64md4\" (UniqueName: \"kubernetes.io/projected/c82bf83f-1d94-41a2-ad18-2264806dd9ff-kube-api-access-64md4\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298350 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298359 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298370 4752 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c82bf83f-1d94-41a2-ad18-2264806dd9ff-audit-dir\") on node \"crc\" 
DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298378 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.298389 4752 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c82bf83f-1d94-41a2-ad18-2264806dd9ff-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.483142 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dx6f6_565ffec2-a221-48d7-b657-a59c7dff1de1/marketplace-operator/1.log" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.486589 4752 generic.go:334] "Generic (PLEG): container finished" podID="c82bf83f-1d94-41a2-ad18-2264806dd9ff" containerID="20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483" exitCode=0 Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.486634 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" event={"ID":"c82bf83f-1d94-41a2-ad18-2264806dd9ff","Type":"ContainerDied","Data":"20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483"} Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.486663 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" event={"ID":"c82bf83f-1d94-41a2-ad18-2264806dd9ff","Type":"ContainerDied","Data":"f7273e3e41e740468a28c65cea0dc1c137948fd320c682b1583937645b54274d"} Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.486685 4752 scope.go:117] "RemoveContainer" containerID="20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.486754 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zw6f2" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.528468 4752 scope.go:117] "RemoveContainer" containerID="20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483" Jan 22 10:28:52 crc kubenswrapper[4752]: E0122 10:28:52.529228 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483\": container with ID starting with 20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483 not found: ID does not exist" containerID="20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483" Jan 22 10:28:52 crc kubenswrapper[4752]: I0122 10:28:52.529299 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483"} err="failed to get container status \"20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483\": rpc error: code = NotFound desc = could not find container \"20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483\": container with ID starting with 20505c59d4fe8a47321f95a2c582d933e2f1a70e84fa82dc99692c5e256a5483 not found: ID does not exist" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.463632 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.464184 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.464737 4752 scope.go:117] "RemoveContainer" containerID="88e9391d996453b1483a4bc5e8a0196655199d72654d90b3354820fca85fedc0" Jan 22 10:28:57 crc kubenswrapper[4752]: E0122 10:28:57.465195 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-dx6f6_openshift-marketplace(565ffec2-a221-48d7-b657-a59c7dff1de1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" podUID="565ffec2-a221-48d7-b657-a59c7dff1de1" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.724320 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.724423 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.724474 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.725069 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:28:57 crc kubenswrapper[4752]: I0122 10:28:57.725172 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4" gracePeriod=600 Jan 22 10:28:58 crc kubenswrapper[4752]: I0122 10:28:58.543388 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4" exitCode=0 Jan 22 10:28:58 crc kubenswrapper[4752]: I0122 10:28:58.543497 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4"} Jan 22 10:28:58 crc kubenswrapper[4752]: I0122 10:28:58.544156 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"66d8fd85af8a62cbf6d844a6a3cd419c43895f7d9d194b9b69dabd0d0f78951a"} Jan 22 10:28:59 crc kubenswrapper[4752]: I0122 10:28:59.158700 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 10:29:00 crc kubenswrapper[4752]: I0122 10:29:00.236761 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 10:29:00 crc kubenswrapper[4752]: I0122 10:29:00.508121 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.097014 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.163770 4752 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.163934 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.312752 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.314272 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 10:29:01 crc 
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.369612 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.371803 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.425162 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.488454 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.504364 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.674680 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.861571 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 22 10:29:01 crc kubenswrapper[4752]: I0122 10:29:01.936374 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.144415 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.174406 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.275437 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.285316 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.471497 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.505817 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.544968 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.584487 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.734298 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 10:29:02 crc kubenswrapper[4752]: I0122 10:29:02.799338 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.016087 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.121486 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.157089 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.159190 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.401673 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.495713 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.534022 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.691297 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.733525 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.744125 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.744624 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.862131 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 22 10:29:03 crc kubenswrapper[4752]: I0122 10:29:03.981833 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.029181 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.100415 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.158711 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.172471 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.174621 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.269307 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.273328 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.293243 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.381456 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.408383 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.430410 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.456426 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.457019 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.536072 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.552787 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.556221 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.594503 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.767168 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.816725 4752 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.838373 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.869313 4752 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.875929 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 22 10:29:04 crc kubenswrapper[4752]: I0122 10:29:04.913245 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.028334 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.044653 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.256582 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.423535 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.519613 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.575538 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.780603 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.806344 4752 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.823441 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.862753 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.878809 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 22 10:29:05 crc kubenswrapper[4752]: I0122 10:29:05.884571 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.033134 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.058884 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.185952 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.319479 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.323015 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.354942 4752 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.370395 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.664522 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.684808 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.774403 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.798660 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.799845 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.925718 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 10:29:06 crc kubenswrapper[4752]: I0122 10:29:06.946780 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.131529 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.154911 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.204724 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.229255 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.269635 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.480673 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.486438 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.582989 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.599091 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.793102 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.833096 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.843272 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 10:29:07 crc kubenswrapper[4752]: I0122 10:29:07.882234 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.008358 4752 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.057450 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.106578 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.114152 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.163490 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.241201 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.298945 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.342042 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.343042 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.366135 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.432951 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.526450 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.541489 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.622896 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.689038 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.735382 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.788077 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.829353 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.831337 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 10:29:08 crc kubenswrapper[4752]: I0122 10:29:08.902294 4752 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.018186 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.045605 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.047098 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.082305 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.209107 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.222937 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.284062 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.286492 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.326008 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.387041 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.578905 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.708293 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.739194 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.824059 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.938446 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.944556 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 10:29:09 crc kubenswrapper[4752]: I0122 10:29:09.982405 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.049076 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.072431 4752 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.089998 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.104356 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.213510 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.236642 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.278951 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.350908 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.369612 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.562825 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.577526 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.578043 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.660673 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.670465 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.729027 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.736525 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.755499 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.939080 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 10:29:10 crc kubenswrapper[4752]: I0122 10:29:10.995935 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.004587 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.008105 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 
10:29:11.034630 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.147180 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.167589 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.173092 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.189851 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.303512 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.333651 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.412521 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.444242 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.455112 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.613819 4752 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.620113 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.687161 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.701533 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.771393 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.838122 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.887913 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.893165 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.896659 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 10:29:11 crc 
Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.896814 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 22 10:29:11 crc kubenswrapper[4752]: I0122 10:29:11.954013 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.017233 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.024056 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.207016 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.253560 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.287803 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.374952 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.388759 4752 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396393 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-zw6f2","openshift-marketplace/community-operators-z4qst","openshift-marketplace/community-operators-4r8gj","openshift-marketplace/marketplace-operator-79b997595-gx9kk","openshift-marketplace/redhat-operators-lpphf","openshift-marketplace/certified-operators-x65n2","openshift-marketplace/redhat-marketplace-j4jbd","openshift-marketplace/redhat-marketplace-dblhf","openshift-marketplace/certified-operators-x72kv"]
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396528 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-6x9sd","openshift-kube-apiserver/kube-apiserver-crc"]
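Note: "SyncLoop REMOVE"/"SyncLoop ADD" with source="api" is the kubelet's merged pod-config stream at work: the apiserver watch (the *v1.Pod reflector from pkg/kubelet/config/apiserver.go:66 above) delivers a batched deletion of the old oauth-openshift replica and the stale marketplace catalog pods, then the creation of their successors. A toy model of that update channel, with hypothetical types (podUpdate is not the kubelet's real struct):

    package main

    import "fmt"

    // podUpdate is a hypothetical stand-in for the kubelet's internal pod
    // config update; Op mirrors the ADD/UPDATE/REMOVE verbs in the log.
    type podUpdate struct {
        Op   string
        Pods []string
    }

    // syncLoop drains the merged update stream; every entry above tagged
    // source="api" arrived on a channel like this one.
    func syncLoop(updates <-chan podUpdate) {
        for u := range updates {
            fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.Op, "api", u.Pods)
        }
    }

    func main() {
        ch := make(chan podUpdate, 2)
        ch <- podUpdate{Op: "REMOVE", Pods: []string{"openshift-authentication/oauth-openshift-558db77b4-zw6f2"}}
        ch <- podUpdate{Op: "ADD", Pods: []string{"openshift-authentication/oauth-openshift-56c7c74f4-6x9sd"}}
        close(ch)
        syncLoop(ch)
    }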
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.396775 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" containerName="installer"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396793 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" containerName="installer"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.396810 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396824 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.396845 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396891 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.396911 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396925 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.396950 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396964 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.396983 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.396995 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397011 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397024 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397039 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" containerName="marketplace-operator"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397053 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" containerName="marketplace-operator"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397074 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397087 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397107 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397120 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397137 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397149 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="extract-content"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397165 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397179 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397199 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397215 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="extract-utilities"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397234 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82bf83f-1d94-41a2-ad18-2264806dd9ff" containerName="oauth-openshift"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397252 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82bf83f-1d94-41a2-ad18-2264806dd9ff" containerName="oauth-openshift"
Jan 22 10:29:12 crc kubenswrapper[4752]: E0122 10:29:12.397272 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397287 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397226 4752 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397360 4752 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5500f584-6d10-4bf4-8ec6-98157d49828c"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397446 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397465 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82bf83f-1d94-41a2-ad18-2264806dd9ff" containerName="oauth-openshift"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397484 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="512fd0f5-4e67-429d-abe3-7eea327491ee" containerName="installer"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397497 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="41faa779-87e9-41e1-a547-feba13612d57" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397510 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397532 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" containerName="marketplace-operator"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.397551 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" containerName="registry-server"
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.398104 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dx6f6"]
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.399066 4752 scope.go:117] "RemoveContainer" containerID="88e9391d996453b1483a4bc5e8a0196655199d72654d90b3354820fca85fedc0" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.404296 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.404910 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405039 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405094 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405423 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405116 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405136 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405165 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.405579 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.406482 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.406601 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.406955 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.411830 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.421309 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.432361 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.432765 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.447924 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 10:29:12 crc 
Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.462130 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.462108265 podStartE2EDuration="23.462108265s" podCreationTimestamp="2026-01-22 10:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:29:12.459419429 +0000 UTC m=+231.689362347" watchObservedRunningTime="2026-01-22 10:29:12.462108265 +0000 UTC m=+231.692051173"
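Note: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp: 10:29:12.462108265 - 10:28:49 = 23.462108265s. The zeroed firstStartedPulling/lastFinishedPulling (year-one timestamps) indicate no image pull contributed to the latency. The arithmetic can be checked with the Go standard library, since the entry uses Go's default time formatting:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting used in the entry.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-22 10:28:49 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2026-01-22 10:29:12.462108265 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // prints 23.462108265s
    }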
\"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575253 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575285 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-audit-policies\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575310 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575387 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575420 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575514 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzfg\" (UniqueName: \"kubernetes.io/projected/518c0a57-bb02-4569-8371-df9309b6efc4-kube-api-access-dbzfg\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575630 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.575659 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.584579 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.633905 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dx6f6_565ffec2-a221-48d7-b657-a59c7dff1de1/marketplace-operator/1.log" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.634310 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" event={"ID":"565ffec2-a221-48d7-b657-a59c7dff1de1","Type":"ContainerStarted","Data":"ba72c3ae10a18f579a1081bf32dc338b4faa99c558814e971c5e8f7e100c77ac"} Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.634793 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.636679 4752 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dx6f6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.636720 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" podUID="565ffec2-a221-48d7-b657-a59c7dff1de1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.654088 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" podStartSLOduration=46.65407041 podStartE2EDuration="46.65407041s" podCreationTimestamp="2026-01-22 10:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:29:12.651685502 +0000 UTC m=+231.881628410" watchObservedRunningTime="2026-01-22 10:29:12.65407041 +0000 UTC m=+231.884013318" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.676522 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.677384 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.677428 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: 
\"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.677766 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzfg\" (UniqueName: \"kubernetes.io/projected/518c0a57-bb02-4569-8371-df9309b6efc4-kube-api-access-dbzfg\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.677847 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.678258 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.678291 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.678942 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.678966 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/518c0a57-bb02-4569-8371-df9309b6efc4-audit-dir\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.678996 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679018 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " 
pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679044 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679059 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679075 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-audit-policies\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679088 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679116 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.679280 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/518c0a57-bb02-4569-8371-df9309b6efc4-audit-dir\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.680012 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.682159 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc 
kubenswrapper[4752]: I0122 10:29:12.683206 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518c0a57-bb02-4569-8371-df9309b6efc4-audit-policies\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.683405 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.683522 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.684272 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.684710 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.685577 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.685810 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.686986 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.687299 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: 
\"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.687424 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.690547 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/518c0a57-bb02-4569-8371-df9309b6efc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.700073 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzfg\" (UniqueName: \"kubernetes.io/projected/518c0a57-bb02-4569-8371-df9309b6efc4-kube-api-access-dbzfg\") pod \"oauth-openshift-56c7c74f4-6x9sd\" (UID: \"518c0a57-bb02-4569-8371-df9309b6efc4\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.704440 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.705536 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.740945 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.747513 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.748420 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.770805 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.803166 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.816287 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.825263 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.892332 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.922329 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 10:29:12 crc kubenswrapper[4752]: I0122 10:29:12.941700 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-6x9sd"] Jan 22 10:29:12 crc kubenswrapper[4752]: W0122 10:29:12.947920 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod518c0a57_bb02_4569_8371_df9309b6efc4.slice/crio-506bdc7444e692ee3956c8fd08421e76a40bdec21a8995e460b3a7d39774909b WatchSource:0}: Error finding container 506bdc7444e692ee3956c8fd08421e76a40bdec21a8995e460b3a7d39774909b: Status 404 returned error can't find the container with id 506bdc7444e692ee3956c8fd08421e76a40bdec21a8995e460b3a7d39774909b Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.012085 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.052899 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.110213 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0386a68f-2339-4ef6-8d96-e518b0682b4a" path="/var/lib/kubelet/pods/0386a68f-2339-4ef6-8d96-e518b0682b4a/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.110909 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d035039-51b4-41aa-9e33-db0a1ca24332" path="/var/lib/kubelet/pods/0d035039-51b4-41aa-9e33-db0a1ca24332/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.112256 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7b73c6-cc59-400a-858e-85af0b88a5b9" path="/var/lib/kubelet/pods/3c7b73c6-cc59-400a-858e-85af0b88a5b9/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.116153 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41faa779-87e9-41e1-a547-feba13612d57" 
path="/var/lib/kubelet/pods/41faa779-87e9-41e1-a547-feba13612d57/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.119028 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bba5d5d-5eca-4d85-a64d-e127c8fe0e98" path="/var/lib/kubelet/pods/4bba5d5d-5eca-4d85-a64d-e127c8fe0e98/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.122404 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71357b8c-126d-4119-943e-653febd0612d" path="/var/lib/kubelet/pods/71357b8c-126d-4119-943e-653febd0612d/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.123043 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82bf83f-1d94-41a2-ad18-2264806dd9ff" path="/var/lib/kubelet/pods/c82bf83f-1d94-41a2-ad18-2264806dd9ff/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.125567 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7873b3a-ffed-4c96-818c-90117b142098" path="/var/lib/kubelet/pods/d7873b3a-ffed-4c96-818c-90117b142098/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.135345 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0789a0-f347-4a6f-ba09-cd7cb558d5b5" path="/var/lib/kubelet/pods/eb0789a0-f347-4a6f-ba09-cd7cb558d5b5/volumes" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.160637 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.199166 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.243922 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.317821 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.344032 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.371180 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.419434 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.444820 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.498350 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.640185 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" event={"ID":"518c0a57-bb02-4569-8371-df9309b6efc4","Type":"ContainerStarted","Data":"506bdc7444e692ee3956c8fd08421e76a40bdec21a8995e460b3a7d39774909b"} Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.644562 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-dx6f6" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.825265 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 10:29:13 crc kubenswrapper[4752]: I0122 10:29:13.899249 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.098118 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.208784 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.334822 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.393877 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.394021 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.451334 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.495305 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.589047 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.646776 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" event={"ID":"518c0a57-bb02-4569-8371-df9309b6efc4","Type":"ContainerStarted","Data":"dc1ab8ed7ffd0de83df2953cc6df512bf49556bab3897f1c36ec98599274adf0"} Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.678521 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" podStartSLOduration=48.678505324 podStartE2EDuration="48.678505324s" podCreationTimestamp="2026-01-22 10:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:29:14.676378033 +0000 UTC m=+233.906321031" watchObservedRunningTime="2026-01-22 10:29:14.678505324 +0000 UTC m=+233.908448232" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.707043 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.887266 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 10:29:14 crc kubenswrapper[4752]: I0122 10:29:14.932373 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.030148 4752 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.186103 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.315995 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.512003 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.530695 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.653160 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.660323 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c7c74f4-6x9sd" Jan 22 10:29:15 crc kubenswrapper[4752]: I0122 10:29:15.725768 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 10:29:16 crc kubenswrapper[4752]: I0122 10:29:16.302911 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 10:29:19 crc kubenswrapper[4752]: I0122 10:29:19.479915 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xn8dz"] Jan 22 10:29:22 crc kubenswrapper[4752]: I0122 10:29:22.288664 4752 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 10:29:22 crc kubenswrapper[4752]: I0122 10:29:22.289152 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8c7283d40bbfdd4a5b7a929c5163d5c88310b664603551e7c798559040ae3e4e" gracePeriod=5 Jan 22 10:29:27 crc kubenswrapper[4752]: I0122 10:29:27.737396 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 10:29:27 crc kubenswrapper[4752]: I0122 10:29:27.737640 4752 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8c7283d40bbfdd4a5b7a929c5163d5c88310b664603551e7c798559040ae3e4e" exitCode=137 Jan 22 10:29:27 crc kubenswrapper[4752]: I0122 10:29:27.934557 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 10:29:27 crc kubenswrapper[4752]: I0122 10:29:27.934645 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.063358 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.063488 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.063553 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.063598 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.063661 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.064164 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.064659 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.064729 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.064780 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.075637 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.165128 4752 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.165385 4752 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.165478 4752 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.165578 4752 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.165670 4752 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.747427 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.747515 4752 scope.go:117] "RemoveContainer" containerID="8c7283d40bbfdd4a5b7a929c5163d5c88310b664603551e7c798559040ae3e4e" Jan 22 10:29:28 crc kubenswrapper[4752]: I0122 10:29:28.747669 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 10:29:29 crc kubenswrapper[4752]: I0122 10:29:29.107521 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.822684 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jfg7t"] Jan 22 10:29:35 crc kubenswrapper[4752]: E0122 10:29:35.823933 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.823963 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.824197 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.825935 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.829990 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.834830 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfg7t"] Jan 22 10:29:35 crc kubenswrapper[4752]: I0122 10:29:35.919352 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa720af1-e5fa-4ace-8113-3a6ab884f290-utilities\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.009300 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdmk8"] Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.011344 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.018083 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.020246 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa720af1-e5fa-4ace-8113-3a6ab884f290-catalog-content\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.020314 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4n8g\" (UniqueName: \"kubernetes.io/projected/fa720af1-e5fa-4ace-8113-3a6ab884f290-kube-api-access-r4n8g\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.020385 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa720af1-e5fa-4ace-8113-3a6ab884f290-utilities\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.021194 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa720af1-e5fa-4ace-8113-3a6ab884f290-utilities\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.025630 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdmk8"] Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.122080 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lhq\" (UniqueName: \"kubernetes.io/projected/422c8d2a-f9fe-4807-88c3-874a4b062612-kube-api-access-m2lhq\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") 
" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.122145 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa720af1-e5fa-4ace-8113-3a6ab884f290-catalog-content\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.122399 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4n8g\" (UniqueName: \"kubernetes.io/projected/fa720af1-e5fa-4ace-8113-3a6ab884f290-kube-api-access-r4n8g\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.122468 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-catalog-content\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.122490 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa720af1-e5fa-4ace-8113-3a6ab884f290-catalog-content\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.122521 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-utilities\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.144372 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4n8g\" (UniqueName: \"kubernetes.io/projected/fa720af1-e5fa-4ace-8113-3a6ab884f290-kube-api-access-r4n8g\") pod \"community-operators-jfg7t\" (UID: \"fa720af1-e5fa-4ace-8113-3a6ab884f290\") " pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.157772 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.223185 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-catalog-content\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.223234 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-utilities\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.223342 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2lhq\" (UniqueName: \"kubernetes.io/projected/422c8d2a-f9fe-4807-88c3-874a4b062612-kube-api-access-m2lhq\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.224511 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-utilities\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.224714 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-catalog-content\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.242769 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lhq\" (UniqueName: \"kubernetes.io/projected/422c8d2a-f9fe-4807-88c3-874a4b062612-kube-api-access-m2lhq\") pod \"certified-operators-cdmk8\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.327692 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.412615 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jfg7t"] Jan 22 10:29:36 crc kubenswrapper[4752]: W0122 10:29:36.430630 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa720af1_e5fa_4ace_8113_3a6ab884f290.slice/crio-dd61eb149202d0c8cdb4a0c362e29cd9ece1151d6adfad9dd7c0b78d4c201bed WatchSource:0}: Error finding container dd61eb149202d0c8cdb4a0c362e29cd9ece1151d6adfad9dd7c0b78d4c201bed: Status 404 returned error can't find the container with id dd61eb149202d0c8cdb4a0c362e29cd9ece1151d6adfad9dd7c0b78d4c201bed Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.598378 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdmk8"] Jan 22 10:29:36 crc kubenswrapper[4752]: W0122 10:29:36.606186 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod422c8d2a_f9fe_4807_88c3_874a4b062612.slice/crio-c230ffa55525975f2fe672d6c468a92fdda7e8fd382dfa4d9b81d91bc858b594 WatchSource:0}: Error finding container c230ffa55525975f2fe672d6c468a92fdda7e8fd382dfa4d9b81d91bc858b594: Status 404 returned error can't find the container with id c230ffa55525975f2fe672d6c468a92fdda7e8fd382dfa4d9b81d91bc858b594 Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.802057 4752 generic.go:334] "Generic (PLEG): container finished" podID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerID="03ab387d66cef87224731aff0dfc2846d8fa9c2e23c04ea2cde42457d65ae945" exitCode=0 Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.802190 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerDied","Data":"03ab387d66cef87224731aff0dfc2846d8fa9c2e23c04ea2cde42457d65ae945"} Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.802535 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerStarted","Data":"c230ffa55525975f2fe672d6c468a92fdda7e8fd382dfa4d9b81d91bc858b594"} Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.805263 4752 generic.go:334] "Generic (PLEG): container finished" podID="fa720af1-e5fa-4ace-8113-3a6ab884f290" containerID="28c450228bf46c5903b7ab6a1d156a506079bfd5d4ae94499bf55d8c8b699181" exitCode=0 Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.805324 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfg7t" event={"ID":"fa720af1-e5fa-4ace-8113-3a6ab884f290","Type":"ContainerDied","Data":"28c450228bf46c5903b7ab6a1d156a506079bfd5d4ae94499bf55d8c8b699181"} Jan 22 10:29:36 crc kubenswrapper[4752]: I0122 10:29:36.805359 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfg7t" event={"ID":"fa720af1-e5fa-4ace-8113-3a6ab884f290","Type":"ContainerStarted","Data":"dd61eb149202d0c8cdb4a0c362e29cd9ece1151d6adfad9dd7c0b78d4c201bed"} Jan 22 10:29:37 crc kubenswrapper[4752]: I0122 10:29:37.816198 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" 
event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerStarted","Data":"aabf104cce6c5c3fd2ac4c73c8abbd7c9c028c1ff79876f93a25fba80e2dd131"} Jan 22 10:29:37 crc kubenswrapper[4752]: I0122 10:29:37.822328 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfg7t" event={"ID":"fa720af1-e5fa-4ace-8113-3a6ab884f290","Type":"ContainerStarted","Data":"9e68e18a0112f5a4bb314baaec7ded46935b36acc70d4134990eb68e370d5d85"} Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.211291 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44kl8"] Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.213284 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.215642 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.229634 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44kl8"] Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.354738 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbc2g\" (UniqueName: \"kubernetes.io/projected/d494f5bf-f094-40ab-8edb-8205291118fb-kube-api-access-lbc2g\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.354830 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d494f5bf-f094-40ab-8edb-8205291118fb-utilities\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.354976 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d494f5bf-f094-40ab-8edb-8205291118fb-catalog-content\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.404615 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cg62z"] Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.405578 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.407941 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.425986 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cg62z"] Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.456231 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d494f5bf-f094-40ab-8edb-8205291118fb-catalog-content\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.456485 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbc2g\" (UniqueName: \"kubernetes.io/projected/d494f5bf-f094-40ab-8edb-8205291118fb-kube-api-access-lbc2g\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.456580 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d494f5bf-f094-40ab-8edb-8205291118fb-utilities\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.457333 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d494f5bf-f094-40ab-8edb-8205291118fb-catalog-content\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.457347 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d494f5bf-f094-40ab-8edb-8205291118fb-utilities\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.487413 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbc2g\" (UniqueName: \"kubernetes.io/projected/d494f5bf-f094-40ab-8edb-8205291118fb-kube-api-access-lbc2g\") pod \"redhat-marketplace-44kl8\" (UID: \"d494f5bf-f094-40ab-8edb-8205291118fb\") " pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.544474 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.557840 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bda21db-8dc2-405f-bf37-7e60cc78a98b-catalog-content\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.557921 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bda21db-8dc2-405f-bf37-7e60cc78a98b-utilities\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.558239 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4nwg\" (UniqueName: \"kubernetes.io/projected/6bda21db-8dc2-405f-bf37-7e60cc78a98b-kube-api-access-q4nwg\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.662129 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bda21db-8dc2-405f-bf37-7e60cc78a98b-catalog-content\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.662210 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bda21db-8dc2-405f-bf37-7e60cc78a98b-utilities\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.662273 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4nwg\" (UniqueName: \"kubernetes.io/projected/6bda21db-8dc2-405f-bf37-7e60cc78a98b-kube-api-access-q4nwg\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.662979 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bda21db-8dc2-405f-bf37-7e60cc78a98b-catalog-content\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.663366 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bda21db-8dc2-405f-bf37-7e60cc78a98b-utilities\") pod \"redhat-operators-cg62z\" (UID: \"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.695672 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4nwg\" (UniqueName: \"kubernetes.io/projected/6bda21db-8dc2-405f-bf37-7e60cc78a98b-kube-api-access-q4nwg\") pod \"redhat-operators-cg62z\" (UID: 
\"6bda21db-8dc2-405f-bf37-7e60cc78a98b\") " pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.737284 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.805660 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44kl8"] Jan 22 10:29:38 crc kubenswrapper[4752]: W0122 10:29:38.811437 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd494f5bf_f094_40ab_8edb_8205291118fb.slice/crio-b1c662486cb5f009348fccbd87de589cd06e60b11533cec1856c4d79c1939906 WatchSource:0}: Error finding container b1c662486cb5f009348fccbd87de589cd06e60b11533cec1856c4d79c1939906: Status 404 returned error can't find the container with id b1c662486cb5f009348fccbd87de589cd06e60b11533cec1856c4d79c1939906 Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.832092 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44kl8" event={"ID":"d494f5bf-f094-40ab-8edb-8205291118fb","Type":"ContainerStarted","Data":"b1c662486cb5f009348fccbd87de589cd06e60b11533cec1856c4d79c1939906"} Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.833977 4752 generic.go:334] "Generic (PLEG): container finished" podID="fa720af1-e5fa-4ace-8113-3a6ab884f290" containerID="9e68e18a0112f5a4bb314baaec7ded46935b36acc70d4134990eb68e370d5d85" exitCode=0 Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.834044 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfg7t" event={"ID":"fa720af1-e5fa-4ace-8113-3a6ab884f290","Type":"ContainerDied","Data":"9e68e18a0112f5a4bb314baaec7ded46935b36acc70d4134990eb68e370d5d85"} Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.836925 4752 generic.go:334] "Generic (PLEG): container finished" podID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerID="aabf104cce6c5c3fd2ac4c73c8abbd7c9c028c1ff79876f93a25fba80e2dd131" exitCode=0 Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.836964 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerDied","Data":"aabf104cce6c5c3fd2ac4c73c8abbd7c9c028c1ff79876f93a25fba80e2dd131"} Jan 22 10:29:38 crc kubenswrapper[4752]: I0122 10:29:38.917786 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cg62z"] Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.844709 4752 generic.go:334] "Generic (PLEG): container finished" podID="6bda21db-8dc2-405f-bf37-7e60cc78a98b" containerID="0514554c0f0c732f519aac44f4dfdd994202dc6f0bf9b030a6f407df11cb97b5" exitCode=0 Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.844839 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg62z" event={"ID":"6bda21db-8dc2-405f-bf37-7e60cc78a98b","Type":"ContainerDied","Data":"0514554c0f0c732f519aac44f4dfdd994202dc6f0bf9b030a6f407df11cb97b5"} Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.844903 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg62z" event={"ID":"6bda21db-8dc2-405f-bf37-7e60cc78a98b","Type":"ContainerStarted","Data":"22b54ecf37a3573e923834d0e1fec118618a818328eb904c9998c2626dfec6ff"} Jan 22 10:29:39 
crc kubenswrapper[4752]: I0122 10:29:39.849198 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerStarted","Data":"ef86f04979d35a84a83ae4139884bd428236d29914c17582231de1beda39dd8c"} Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.855521 4752 generic.go:334] "Generic (PLEG): container finished" podID="d494f5bf-f094-40ab-8edb-8205291118fb" containerID="998c7a448a235c99b626b4f44964baea6d3b040fb36e1a9c8bbd61497154f352" exitCode=0 Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.855577 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44kl8" event={"ID":"d494f5bf-f094-40ab-8edb-8205291118fb","Type":"ContainerDied","Data":"998c7a448a235c99b626b4f44964baea6d3b040fb36e1a9c8bbd61497154f352"} Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.859775 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jfg7t" event={"ID":"fa720af1-e5fa-4ace-8113-3a6ab884f290","Type":"ContainerStarted","Data":"d277b99ed45568ec3dd039a73863510145868224eaaf3a6353a1c08aa5b4a948"} Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.916681 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jfg7t" podStartSLOduration=2.491928344 podStartE2EDuration="4.916652043s" podCreationTimestamp="2026-01-22 10:29:35 +0000 UTC" firstStartedPulling="2026-01-22 10:29:36.807898257 +0000 UTC m=+256.037841175" lastFinishedPulling="2026-01-22 10:29:39.232621966 +0000 UTC m=+258.462564874" observedRunningTime="2026-01-22 10:29:39.913634307 +0000 UTC m=+259.143577215" watchObservedRunningTime="2026-01-22 10:29:39.916652043 +0000 UTC m=+259.146594961" Jan 22 10:29:39 crc kubenswrapper[4752]: I0122 10:29:39.933730 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdmk8" podStartSLOduration=2.488387623 podStartE2EDuration="4.933698827s" podCreationTimestamp="2026-01-22 10:29:35 +0000 UTC" firstStartedPulling="2026-01-22 10:29:36.803637126 +0000 UTC m=+256.033580034" lastFinishedPulling="2026-01-22 10:29:39.24894833 +0000 UTC m=+258.478891238" observedRunningTime="2026-01-22 10:29:39.933528932 +0000 UTC m=+259.163471880" watchObservedRunningTime="2026-01-22 10:29:39.933698827 +0000 UTC m=+259.163641735" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.099360 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.140849 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wnsq8"] Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.141164 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" podUID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" containerName="controller-manager" containerID="cri-o://4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48" gracePeriod=30 Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.233533 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"] Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.610356 4752 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.790568 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppjlw\" (UniqueName: \"kubernetes.io/projected/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-kube-api-access-ppjlw\") pod \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.790700 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-client-ca\") pod \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.790739 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-proxy-ca-bundles\") pod \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.790912 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-serving-cert\") pod \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.790957 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-config\") pod \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\" (UID: \"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d\") " Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.791780 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" (UID: "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.791923 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" (UID: "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.791967 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-config" (OuterVolumeSpecName: "config") pod "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" (UID: "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.798933 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" (UID: "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.799062 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-kube-api-access-ppjlw" (OuterVolumeSpecName: "kube-api-access-ppjlw") pod "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" (UID: "fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d"). InnerVolumeSpecName "kube-api-access-ppjlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.866513 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44kl8" event={"ID":"d494f5bf-f094-40ab-8edb-8205291118fb","Type":"ContainerStarted","Data":"8a8251c5288aaed3b67f2a6953cc635cd1521cce40865852c36a4cd38e6f66a7"} Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.869593 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg62z" event={"ID":"6bda21db-8dc2-405f-bf37-7e60cc78a98b","Type":"ContainerStarted","Data":"4dd95ea48cff207ec4b6e79f4dabedb4fb3ca48f9717444331e0c688f14bbd9a"} Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.871422 4752 generic.go:334] "Generic (PLEG): container finished" podID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" containerID="4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48" exitCode=0 Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.871499 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" event={"ID":"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d","Type":"ContainerDied","Data":"4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48"} Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.871515 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" event={"ID":"fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d","Type":"ContainerDied","Data":"d15035fdcb7027328fcbc6342611d2706887e7c04f2a9eeaf902294558e344b8"} Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.871533 4752 scope.go:117] "RemoveContainer" containerID="4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.871849 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wnsq8" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.873739 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" podUID="622b1b03-6c1d-460c-ac51-10046c682195" containerName="route-controller-manager" containerID="cri-o://1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f" gracePeriod=30 Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.892135 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.892664 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.892813 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppjlw\" (UniqueName: \"kubernetes.io/projected/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-kube-api-access-ppjlw\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.892829 4752 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.892838 4752 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.894048 4752 scope.go:117] "RemoveContainer" containerID="4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48" Jan 22 10:29:40 crc kubenswrapper[4752]: E0122 10:29:40.894641 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48\": container with ID starting with 4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48 not found: ID does not exist" containerID="4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48" Jan 22 10:29:40 crc kubenswrapper[4752]: I0122 10:29:40.894682 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48"} err="failed to get container status \"4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48\": rpc error: code = NotFound desc = could not find container \"4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48\": container with ID starting with 4187d4e1fb622485692793f77e6449dbf898365af2feaed629419f5cd16a3d48 not found: ID does not exist" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.028782 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wnsq8"] Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.043707 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wnsq8"] Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.138036 4752 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" path="/var/lib/kubelet/pods/fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d/volumes" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.269221 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.425258 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-config\") pod \"622b1b03-6c1d-460c-ac51-10046c682195\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.425765 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-client-ca\") pod \"622b1b03-6c1d-460c-ac51-10046c682195\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.426060 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwnx\" (UniqueName: \"kubernetes.io/projected/622b1b03-6c1d-460c-ac51-10046c682195-kube-api-access-8fwnx\") pod \"622b1b03-6c1d-460c-ac51-10046c682195\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.426165 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622b1b03-6c1d-460c-ac51-10046c682195-serving-cert\") pod \"622b1b03-6c1d-460c-ac51-10046c682195\" (UID: \"622b1b03-6c1d-460c-ac51-10046c682195\") " Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.426655 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-client-ca" (OuterVolumeSpecName: "client-ca") pod "622b1b03-6c1d-460c-ac51-10046c682195" (UID: "622b1b03-6c1d-460c-ac51-10046c682195"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.427839 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-config" (OuterVolumeSpecName: "config") pod "622b1b03-6c1d-460c-ac51-10046c682195" (UID: "622b1b03-6c1d-460c-ac51-10046c682195"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.431361 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622b1b03-6c1d-460c-ac51-10046c682195-kube-api-access-8fwnx" (OuterVolumeSpecName: "kube-api-access-8fwnx") pod "622b1b03-6c1d-460c-ac51-10046c682195" (UID: "622b1b03-6c1d-460c-ac51-10046c682195"). InnerVolumeSpecName "kube-api-access-8fwnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.433444 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/622b1b03-6c1d-460c-ac51-10046c682195-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "622b1b03-6c1d-460c-ac51-10046c682195" (UID: "622b1b03-6c1d-460c-ac51-10046c682195"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.471788 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-4s6wh"] Jan 22 10:29:41 crc kubenswrapper[4752]: E0122 10:29:41.473136 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622b1b03-6c1d-460c-ac51-10046c682195" containerName="route-controller-manager" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.473159 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="622b1b03-6c1d-460c-ac51-10046c682195" containerName="route-controller-manager" Jan 22 10:29:41 crc kubenswrapper[4752]: E0122 10:29:41.473174 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" containerName="controller-manager" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.473179 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" containerName="controller-manager" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.473284 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="622b1b03-6c1d-460c-ac51-10046c682195" containerName="route-controller-manager" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.473303 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad3a088-8bfc-4fd4-9e82-65d5c43c3f6d" containerName="controller-manager" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.473648 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.476987 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.477810 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.478202 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.478382 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.479309 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.480035 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.488365 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-4s6wh"] Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.489274 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528489 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-config\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " 
pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528590 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzqc\" (UniqueName: \"kubernetes.io/projected/8f34c644-a730-4275-9509-dc72ce0b722c-kube-api-access-lvzqc\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528624 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f34c644-a730-4275-9509-dc72ce0b722c-serving-cert\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528648 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-client-ca\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528686 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-proxy-ca-bundles\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528734 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwnx\" (UniqueName: \"kubernetes.io/projected/622b1b03-6c1d-460c-ac51-10046c682195-kube-api-access-8fwnx\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528750 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/622b1b03-6c1d-460c-ac51-10046c682195-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528763 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.528773 4752 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/622b1b03-6c1d-460c-ac51-10046c682195-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.629655 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzqc\" (UniqueName: \"kubernetes.io/projected/8f34c644-a730-4275-9509-dc72ce0b722c-kube-api-access-lvzqc\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.629724 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8f34c644-a730-4275-9509-dc72ce0b722c-serving-cert\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.629747 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-client-ca\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.629782 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-proxy-ca-bundles\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.629807 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-config\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.632087 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-client-ca\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.632088 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-proxy-ca-bundles\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.632164 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-config\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.639520 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f34c644-a730-4275-9509-dc72ce0b722c-serving-cert\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.650308 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzqc\" (UniqueName: \"kubernetes.io/projected/8f34c644-a730-4275-9509-dc72ce0b722c-kube-api-access-lvzqc\") pod \"controller-manager-6567888b4c-4s6wh\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 
10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.877436 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.878640 4752 generic.go:334] "Generic (PLEG): container finished" podID="6bda21db-8dc2-405f-bf37-7e60cc78a98b" containerID="4dd95ea48cff207ec4b6e79f4dabedb4fb3ca48f9717444331e0c688f14bbd9a" exitCode=0 Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.878725 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg62z" event={"ID":"6bda21db-8dc2-405f-bf37-7e60cc78a98b","Type":"ContainerDied","Data":"4dd95ea48cff207ec4b6e79f4dabedb4fb3ca48f9717444331e0c688f14bbd9a"} Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.881887 4752 generic.go:334] "Generic (PLEG): container finished" podID="d494f5bf-f094-40ab-8edb-8205291118fb" containerID="8a8251c5288aaed3b67f2a6953cc635cd1521cce40865852c36a4cd38e6f66a7" exitCode=0 Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.882199 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44kl8" event={"ID":"d494f5bf-f094-40ab-8edb-8205291118fb","Type":"ContainerDied","Data":"8a8251c5288aaed3b67f2a6953cc635cd1521cce40865852c36a4cd38e6f66a7"} Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.884240 4752 generic.go:334] "Generic (PLEG): container finished" podID="622b1b03-6c1d-460c-ac51-10046c682195" containerID="1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f" exitCode=0 Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.884296 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" event={"ID":"622b1b03-6c1d-460c-ac51-10046c682195","Type":"ContainerDied","Data":"1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f"} Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.884348 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" event={"ID":"622b1b03-6c1d-460c-ac51-10046c682195","Type":"ContainerDied","Data":"0532f52fe3fd9674f50916d0bc0e1da3f9a78bf465d53c424e67d889474cd025"} Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.884371 4752 scope.go:117] "RemoveContainer" containerID="1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.884308 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.931839 4752 scope.go:117] "RemoveContainer" containerID="1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f" Jan 22 10:29:41 crc kubenswrapper[4752]: E0122 10:29:41.932839 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f\": container with ID starting with 1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f not found: ID does not exist" containerID="1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.932964 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f"} err="failed to get container status \"1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f\": rpc error: code = NotFound desc = could not find container \"1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f\": container with ID starting with 1f71eaf584091cc60b8739e02e52207db199e9f117c7b60747008392f796ed1f not found: ID does not exist" Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.954825 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"] Jan 22 10:29:41 crc kubenswrapper[4752]: I0122 10:29:41.958445 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t"] Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.104420 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-4s6wh"] Jan 22 10:29:42 crc kubenswrapper[4752]: W0122 10:29:42.111374 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f34c644_a730_4275_9509_dc72ce0b722c.slice/crio-562edf1270aeec98fe9c00f8667c4e06553ff340c9a30471ea410ba12a171343 WatchSource:0}: Error finding container 562edf1270aeec98fe9c00f8667c4e06553ff340c9a30471ea410ba12a171343: Status 404 returned error can't find the container with id 562edf1270aeec98fe9c00f8667c4e06553ff340c9a30471ea410ba12a171343 Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.242474 4752 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4t27t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.242960 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4t27t" podUID="622b1b03-6c1d-460c-ac51-10046c682195" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.474001 4752 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4"] Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.474803 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.477381 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.477536 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.477704 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.478698 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.479475 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.483505 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.487834 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4"] Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.643232 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-config\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.643324 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkthv\" (UniqueName: \"kubernetes.io/projected/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-kube-api-access-vkthv\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.643362 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-serving-cert\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.643962 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-client-ca\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.745843 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-config\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.745948 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkthv\" (UniqueName: \"kubernetes.io/projected/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-kube-api-access-vkthv\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.745983 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-serving-cert\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.746030 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-client-ca\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.747236 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-client-ca\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.747445 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-config\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.757689 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-serving-cert\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.768737 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkthv\" (UniqueName: \"kubernetes.io/projected/d9650a8a-549c-4452-8b6b-59e4d7d4aacb-kube-api-access-vkthv\") pod \"route-controller-manager-7b8bcf65d7-wfjm4\" (UID: \"d9650a8a-549c-4452-8b6b-59e4d7d4aacb\") " pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.793193 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.892254 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" event={"ID":"8f34c644-a730-4275-9509-dc72ce0b722c","Type":"ContainerStarted","Data":"3e29bd8fdf47624af4f590817f36e1267a23669fd2c4acb3e3df850202dc9506"} Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.892293 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" event={"ID":"8f34c644-a730-4275-9509-dc72ce0b722c","Type":"ContainerStarted","Data":"562edf1270aeec98fe9c00f8667c4e06553ff340c9a30471ea410ba12a171343"} Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.893490 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.903377 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:29:42 crc kubenswrapper[4752]: I0122 10:29:42.936925 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" podStartSLOduration=2.936891493 podStartE2EDuration="2.936891493s" podCreationTimestamp="2026-01-22 10:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:29:42.915095594 +0000 UTC m=+262.145038592" watchObservedRunningTime="2026-01-22 10:29:42.936891493 +0000 UTC m=+262.166834411" Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.110054 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622b1b03-6c1d-460c-ac51-10046c682195" path="/var/lib/kubelet/pods/622b1b03-6c1d-460c-ac51-10046c682195/volumes" Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.256566 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4"] Jan 22 10:29:43 crc kubenswrapper[4752]: W0122 10:29:43.256609 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9650a8a_549c_4452_8b6b_59e4d7d4aacb.slice/crio-8fd3324fffd3fee11f88119feb776244bcdf9099b5b5aad8fe0f372001958d69 WatchSource:0}: Error finding container 8fd3324fffd3fee11f88119feb776244bcdf9099b5b5aad8fe0f372001958d69: Status 404 returned error can't find the container with id 8fd3324fffd3fee11f88119feb776244bcdf9099b5b5aad8fe0f372001958d69 Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.900768 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" event={"ID":"d9650a8a-549c-4452-8b6b-59e4d7d4aacb","Type":"ContainerStarted","Data":"7bf5adda3522f60d14274339133f01bd07c4c6ae4e544d7bba3342e3e92c903c"} Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.901242 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.901258 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" 
event={"ID":"d9650a8a-549c-4452-8b6b-59e4d7d4aacb","Type":"ContainerStarted","Data":"8fd3324fffd3fee11f88119feb776244bcdf9099b5b5aad8fe0f372001958d69"} Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.907487 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg62z" event={"ID":"6bda21db-8dc2-405f-bf37-7e60cc78a98b","Type":"ContainerStarted","Data":"e13e273da9358fa89610712441eee50b59ce8b2edd8b4c4a4a0f0e9b77ddf3a5"} Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.912464 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.923442 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b8bcf65d7-wfjm4" podStartSLOduration=3.923424846 podStartE2EDuration="3.923424846s" podCreationTimestamp="2026-01-22 10:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:29:43.920545104 +0000 UTC m=+263.150488012" watchObservedRunningTime="2026-01-22 10:29:43.923424846 +0000 UTC m=+263.153367754" Jan 22 10:29:43 crc kubenswrapper[4752]: I0122 10:29:43.945410 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cg62z" podStartSLOduration=2.832057215 podStartE2EDuration="5.94539154s" podCreationTimestamp="2026-01-22 10:29:38 +0000 UTC" firstStartedPulling="2026-01-22 10:29:39.847285672 +0000 UTC m=+259.077228590" lastFinishedPulling="2026-01-22 10:29:42.960620007 +0000 UTC m=+262.190562915" observedRunningTime="2026-01-22 10:29:43.941641334 +0000 UTC m=+263.171584242" watchObservedRunningTime="2026-01-22 10:29:43.94539154 +0000 UTC m=+263.175334448" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.520316 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" podUID="4140f15a-5e23-431b-ad69-a64d54325d19" containerName="registry" containerID="cri-o://67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb" gracePeriod=30 Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.882996 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.925569 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44kl8" event={"ID":"d494f5bf-f094-40ab-8edb-8205291118fb","Type":"ContainerStarted","Data":"107a913165d9ee9bcc481ce3362829965622e8708f75a3134d14544992c0c3ee"} Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.928239 4752 generic.go:334] "Generic (PLEG): container finished" podID="4140f15a-5e23-431b-ad69-a64d54325d19" containerID="67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb" exitCode=0 Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.928290 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.928327 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" event={"ID":"4140f15a-5e23-431b-ad69-a64d54325d19","Type":"ContainerDied","Data":"67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb"} Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.928352 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xn8dz" event={"ID":"4140f15a-5e23-431b-ad69-a64d54325d19","Type":"ContainerDied","Data":"525156f70e1202a8970960c98238a12e0f86e1762310a9a710448f0c18d66168"} Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.928414 4752 scope.go:117] "RemoveContainer" containerID="67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.944391 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44kl8" podStartSLOduration=3.26370169 podStartE2EDuration="6.944366477s" podCreationTimestamp="2026-01-22 10:29:38 +0000 UTC" firstStartedPulling="2026-01-22 10:29:39.857675557 +0000 UTC m=+259.087618465" lastFinishedPulling="2026-01-22 10:29:43.538340304 +0000 UTC m=+262.768283252" observedRunningTime="2026-01-22 10:29:44.942224646 +0000 UTC m=+264.172167554" watchObservedRunningTime="2026-01-22 10:29:44.944366477 +0000 UTC m=+264.174309385" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.965569 4752 scope.go:117] "RemoveContainer" containerID="67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb" Jan 22 10:29:44 crc kubenswrapper[4752]: E0122 10:29:44.966005 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb\": container with ID starting with 67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb not found: ID does not exist" containerID="67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.966037 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb"} err="failed to get container status \"67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb\": rpc error: code = NotFound desc = could not find container \"67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb\": container with ID starting with 67941ce38fc724f60f199c654f4c5362ee74333ceb514e71434425b76eefe6eb not found: ID does not exist" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.980944 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-trusted-ca\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.981019 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4140f15a-5e23-431b-ad69-a64d54325d19-installation-pull-secrets\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: 
I0122 10:29:44.981048 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-registry-certificates\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.981079 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4140f15a-5e23-431b-ad69-a64d54325d19-ca-trust-extracted\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.981098 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k5jc\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-kube-api-access-9k5jc\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.981133 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-registry-tls\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.981331 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.983074 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.983098 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.988215 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4140f15a-5e23-431b-ad69-a64d54325d19-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.988577 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-kube-api-access-9k5jc" (OuterVolumeSpecName: "kube-api-access-9k5jc") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "kube-api-access-9k5jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.992203 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:29:44 crc kubenswrapper[4752]: I0122 10:29:44.995526 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.001758 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4140f15a-5e23-431b-ad69-a64d54325d19-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.082141 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-bound-sa-token\") pod \"4140f15a-5e23-431b-ad69-a64d54325d19\" (UID: \"4140f15a-5e23-431b-ad69-a64d54325d19\") " Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.083267 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.083406 4752 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4140f15a-5e23-431b-ad69-a64d54325d19-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.083466 4752 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4140f15a-5e23-431b-ad69-a64d54325d19-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.083495 4752 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4140f15a-5e23-431b-ad69-a64d54325d19-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.083511 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k5jc\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-kube-api-access-9k5jc\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.083525 4752 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.087409 4752 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4140f15a-5e23-431b-ad69-a64d54325d19" (UID: "4140f15a-5e23-431b-ad69-a64d54325d19"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.184739 4752 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4140f15a-5e23-431b-ad69-a64d54325d19-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.242508 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xn8dz"] Jan 22 10:29:45 crc kubenswrapper[4752]: I0122 10:29:45.245756 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xn8dz"] Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.158244 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.158634 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.205082 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.327924 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.328377 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.378245 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:46 crc kubenswrapper[4752]: I0122 10:29:46.978527 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 10:29:47 crc kubenswrapper[4752]: I0122 10:29:47.000525 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jfg7t" Jan 22 10:29:47 crc kubenswrapper[4752]: I0122 10:29:47.105783 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4140f15a-5e23-431b-ad69-a64d54325d19" path="/var/lib/kubelet/pods/4140f15a-5e23-431b-ad69-a64d54325d19/volumes" Jan 22 10:29:48 crc kubenswrapper[4752]: I0122 10:29:48.545441 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:48 crc kubenswrapper[4752]: I0122 10:29:48.545559 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:48 crc kubenswrapper[4752]: I0122 10:29:48.610047 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:48 crc kubenswrapper[4752]: I0122 10:29:48.737686 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:48 crc kubenswrapper[4752]: I0122 10:29:48.737791 4752 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:49 crc kubenswrapper[4752]: I0122 10:29:49.789097 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cg62z" podUID="6bda21db-8dc2-405f-bf37-7e60cc78a98b" containerName="registry-server" probeResult="failure" output=< Jan 22 10:29:49 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 10:29:49 crc kubenswrapper[4752]: > Jan 22 10:29:58 crc kubenswrapper[4752]: I0122 10:29:58.611824 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44kl8" Jan 22 10:29:58 crc kubenswrapper[4752]: I0122 10:29:58.804348 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:29:58 crc kubenswrapper[4752]: I0122 10:29:58.855009 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cg62z" Jan 22 10:30:00 crc kubenswrapper[4752]: I0122 10:30:00.132917 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-4s6wh"] Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.133213 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" podUID="8f34c644-a730-4275-9509-dc72ce0b722c" containerName="controller-manager" containerID="cri-o://3e29bd8fdf47624af4f590817f36e1267a23669fd2c4acb3e3df850202dc9506" gracePeriod=30 Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.189116 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk"] Jan 22 10:30:03 crc kubenswrapper[4752]: E0122 10:30:00.189317 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4140f15a-5e23-431b-ad69-a64d54325d19" containerName="registry" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.189328 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4140f15a-5e23-431b-ad69-a64d54325d19" containerName="registry" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.189423 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4140f15a-5e23-431b-ad69-a64d54325d19" containerName="registry" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.189902 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.192085 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.192178 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.203941 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk"] Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.296943 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc1f620-50fd-495f-b9e0-7e676820eece-config-volume\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.296997 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cgf\" (UniqueName: \"kubernetes.io/projected/6cc1f620-50fd-495f-b9e0-7e676820eece-kube-api-access-r7cgf\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.297054 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc1f620-50fd-495f-b9e0-7e676820eece-secret-volume\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.398729 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc1f620-50fd-495f-b9e0-7e676820eece-secret-volume\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.398843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc1f620-50fd-495f-b9e0-7e676820eece-config-volume\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.398917 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cgf\" (UniqueName: \"kubernetes.io/projected/6cc1f620-50fd-495f-b9e0-7e676820eece-kube-api-access-r7cgf\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.400916 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc1f620-50fd-495f-b9e0-7e676820eece-config-volume\") pod 
\"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.411115 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc1f620-50fd-495f-b9e0-7e676820eece-secret-volume\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.435934 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cgf\" (UniqueName: \"kubernetes.io/projected/6cc1f620-50fd-495f-b9e0-7e676820eece-kube-api-access-r7cgf\") pod \"collect-profiles-29484630-v8tfk\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:00.506376 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:01.878832 4752 patch_prober.go:28] interesting pod/controller-manager-6567888b4c-4s6wh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 22 10:30:03 crc kubenswrapper[4752]: I0122 10:30:01.879340 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" podUID="8f34c644-a730-4275-9509-dc72ce0b722c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.046475 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk"] Jan 22 10:30:04 crc kubenswrapper[4752]: W0122 10:30:04.057159 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc1f620_50fd_495f_b9e0_7e676820eece.slice/crio-4f53ef1cb640b4b5dfaabdae6250d24f04a8940e9f018ede21006ad1248a2773 WatchSource:0}: Error finding container 4f53ef1cb640b4b5dfaabdae6250d24f04a8940e9f018ede21006ad1248a2773: Status 404 returned error can't find the container with id 4f53ef1cb640b4b5dfaabdae6250d24f04a8940e9f018ede21006ad1248a2773 Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.093509 4752 generic.go:334] "Generic (PLEG): container finished" podID="8f34c644-a730-4275-9509-dc72ce0b722c" containerID="3e29bd8fdf47624af4f590817f36e1267a23669fd2c4acb3e3df850202dc9506" exitCode=0 Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.093622 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" event={"ID":"8f34c644-a730-4275-9509-dc72ce0b722c","Type":"ContainerDied","Data":"3e29bd8fdf47624af4f590817f36e1267a23669fd2c4acb3e3df850202dc9506"} Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.532562 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.563245 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-765455979b-4gz4d"] Jan 22 10:30:04 crc kubenswrapper[4752]: E0122 10:30:04.563499 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f34c644-a730-4275-9509-dc72ce0b722c" containerName="controller-manager" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.563525 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f34c644-a730-4275-9509-dc72ce0b722c" containerName="controller-manager" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.563652 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f34c644-a730-4275-9509-dc72ce0b722c" containerName="controller-manager" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.564125 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.585266 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-proxy-ca-bundles\") pod \"8f34c644-a730-4275-9509-dc72ce0b722c\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.585328 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f34c644-a730-4275-9509-dc72ce0b722c-serving-cert\") pod \"8f34c644-a730-4275-9509-dc72ce0b722c\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.585431 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzqc\" (UniqueName: \"kubernetes.io/projected/8f34c644-a730-4275-9509-dc72ce0b722c-kube-api-access-lvzqc\") pod \"8f34c644-a730-4275-9509-dc72ce0b722c\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.586729 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-client-ca\") pod \"8f34c644-a730-4275-9509-dc72ce0b722c\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.586591 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f34c644-a730-4275-9509-dc72ce0b722c" (UID: "8f34c644-a730-4275-9509-dc72ce0b722c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.586805 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-config\") pod \"8f34c644-a730-4275-9509-dc72ce0b722c\" (UID: \"8f34c644-a730-4275-9509-dc72ce0b722c\") " Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587081 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-client-ca\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587135 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-proxy-ca-bundles\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587177 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-config\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587254 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-serving-cert\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587293 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49f5m\" (UniqueName: \"kubernetes.io/projected/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-kube-api-access-49f5m\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587344 4752 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.587739 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f34c644-a730-4275-9509-dc72ce0b722c" (UID: "8f34c644-a730-4275-9509-dc72ce0b722c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.588098 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-config" (OuterVolumeSpecName: "config") pod "8f34c644-a730-4275-9509-dc72ce0b722c" (UID: "8f34c644-a730-4275-9509-dc72ce0b722c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.589613 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-765455979b-4gz4d"] Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.594056 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f34c644-a730-4275-9509-dc72ce0b722c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f34c644-a730-4275-9509-dc72ce0b722c" (UID: "8f34c644-a730-4275-9509-dc72ce0b722c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.595076 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f34c644-a730-4275-9509-dc72ce0b722c-kube-api-access-lvzqc" (OuterVolumeSpecName: "kube-api-access-lvzqc") pod "8f34c644-a730-4275-9509-dc72ce0b722c" (UID: "8f34c644-a730-4275-9509-dc72ce0b722c"). InnerVolumeSpecName "kube-api-access-lvzqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.689355 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-serving-cert\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.690110 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49f5m\" (UniqueName: \"kubernetes.io/projected/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-kube-api-access-49f5m\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.690468 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-client-ca\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.690620 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-proxy-ca-bundles\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.690934 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-config\") pod \"controller-manager-765455979b-4gz4d\" (UID: 
\"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.691115 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.691209 4752 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f34c644-a730-4275-9509-dc72ce0b722c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.691300 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvzqc\" (UniqueName: \"kubernetes.io/projected/8f34c644-a730-4275-9509-dc72ce0b722c-kube-api-access-lvzqc\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.691411 4752 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f34c644-a730-4275-9509-dc72ce0b722c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.691773 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-client-ca\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.692508 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-proxy-ca-bundles\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.694463 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-serving-cert\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.696683 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-config\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.708370 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49f5m\" (UniqueName: \"kubernetes.io/projected/b8773131-0bc2-4970-bd66-a3a0d0d2a43a-kube-api-access-49f5m\") pod \"controller-manager-765455979b-4gz4d\" (UID: \"b8773131-0bc2-4970-bd66-a3a0d0d2a43a\") " pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:04 crc kubenswrapper[4752]: I0122 10:30:04.895385 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.103167 4752 generic.go:334] "Generic (PLEG): container finished" podID="6cc1f620-50fd-495f-b9e0-7e676820eece" containerID="fdf7aa8f30242e2ab0b897febf79c68e7d84447c50caa63138a01f75ba6f3a6d" exitCode=0 Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.105086 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" event={"ID":"6cc1f620-50fd-495f-b9e0-7e676820eece","Type":"ContainerDied","Data":"fdf7aa8f30242e2ab0b897febf79c68e7d84447c50caa63138a01f75ba6f3a6d"} Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.105124 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" event={"ID":"6cc1f620-50fd-495f-b9e0-7e676820eece","Type":"ContainerStarted","Data":"4f53ef1cb640b4b5dfaabdae6250d24f04a8940e9f018ede21006ad1248a2773"} Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.105166 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" event={"ID":"8f34c644-a730-4275-9509-dc72ce0b722c","Type":"ContainerDied","Data":"562edf1270aeec98fe9c00f8667c4e06553ff340c9a30471ea410ba12a171343"} Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.105215 4752 scope.go:117] "RemoveContainer" containerID="3e29bd8fdf47624af4f590817f36e1267a23669fd2c4acb3e3df850202dc9506" Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.105398 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6567888b4c-4s6wh" Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.152733 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-4s6wh"] Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.160250 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6567888b4c-4s6wh"] Jan 22 10:30:05 crc kubenswrapper[4752]: I0122 10:30:05.167022 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-765455979b-4gz4d"] Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.115408 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" event={"ID":"b8773131-0bc2-4970-bd66-a3a0d0d2a43a","Type":"ContainerStarted","Data":"d4367aa2473244f403fdf0b82c97781d77fcd0798c609d1aba12ba46c7e40ed5"} Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.115478 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" event={"ID":"b8773131-0bc2-4970-bd66-a3a0d0d2a43a","Type":"ContainerStarted","Data":"88888f756486c0ec36874ddd6df9ad950b1fc4046e3c5cb7f1422c845469399e"} Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.140584 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" podStartSLOduration=6.140559924 podStartE2EDuration="6.140559924s" podCreationTimestamp="2026-01-22 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:30:06.137956351 +0000 UTC m=+285.367899289" 
watchObservedRunningTime="2026-01-22 10:30:06.140559924 +0000 UTC m=+285.370502842" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.380090 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.414847 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7cgf\" (UniqueName: \"kubernetes.io/projected/6cc1f620-50fd-495f-b9e0-7e676820eece-kube-api-access-r7cgf\") pod \"6cc1f620-50fd-495f-b9e0-7e676820eece\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.414933 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc1f620-50fd-495f-b9e0-7e676820eece-config-volume\") pod \"6cc1f620-50fd-495f-b9e0-7e676820eece\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.414982 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc1f620-50fd-495f-b9e0-7e676820eece-secret-volume\") pod \"6cc1f620-50fd-495f-b9e0-7e676820eece\" (UID: \"6cc1f620-50fd-495f-b9e0-7e676820eece\") " Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.415643 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc1f620-50fd-495f-b9e0-7e676820eece-config-volume" (OuterVolumeSpecName: "config-volume") pod "6cc1f620-50fd-495f-b9e0-7e676820eece" (UID: "6cc1f620-50fd-495f-b9e0-7e676820eece"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.424415 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc1f620-50fd-495f-b9e0-7e676820eece-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6cc1f620-50fd-495f-b9e0-7e676820eece" (UID: "6cc1f620-50fd-495f-b9e0-7e676820eece"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.424496 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc1f620-50fd-495f-b9e0-7e676820eece-kube-api-access-r7cgf" (OuterVolumeSpecName: "kube-api-access-r7cgf") pod "6cc1f620-50fd-495f-b9e0-7e676820eece" (UID: "6cc1f620-50fd-495f-b9e0-7e676820eece"). InnerVolumeSpecName "kube-api-access-r7cgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.516554 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7cgf\" (UniqueName: \"kubernetes.io/projected/6cc1f620-50fd-495f-b9e0-7e676820eece-kube-api-access-r7cgf\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.516615 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc1f620-50fd-495f-b9e0-7e676820eece-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:06 crc kubenswrapper[4752]: I0122 10:30:06.516630 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc1f620-50fd-495f-b9e0-7e676820eece-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:30:07 crc kubenswrapper[4752]: I0122 10:30:07.104822 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f34c644-a730-4275-9509-dc72ce0b722c" path="/var/lib/kubelet/pods/8f34c644-a730-4275-9509-dc72ce0b722c/volumes" Jan 22 10:30:07 crc kubenswrapper[4752]: I0122 10:30:07.123845 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" event={"ID":"6cc1f620-50fd-495f-b9e0-7e676820eece","Type":"ContainerDied","Data":"4f53ef1cb640b4b5dfaabdae6250d24f04a8940e9f018ede21006ad1248a2773"} Jan 22 10:30:07 crc kubenswrapper[4752]: I0122 10:30:07.123911 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f53ef1cb640b4b5dfaabdae6250d24f04a8940e9f018ede21006ad1248a2773" Jan 22 10:30:07 crc kubenswrapper[4752]: I0122 10:30:07.124108 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:07 crc kubenswrapper[4752]: I0122 10:30:07.127646 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk" Jan 22 10:30:07 crc kubenswrapper[4752]: I0122 10:30:07.131548 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-765455979b-4gz4d" Jan 22 10:30:20 crc kubenswrapper[4752]: I0122 10:30:20.948791 4752 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 10:31:27 crc kubenswrapper[4752]: I0122 10:31:27.723954 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:31:27 crc kubenswrapper[4752]: I0122 10:31:27.726068 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:31:57 crc kubenswrapper[4752]: I0122 10:31:57.723870 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:31:57 crc kubenswrapper[4752]: I0122 10:31:57.725950 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:32:27 crc kubenswrapper[4752]: I0122 10:32:27.723665 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:32:27 crc kubenswrapper[4752]: I0122 10:32:27.724334 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:32:27 crc kubenswrapper[4752]: I0122 10:32:27.724423 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:32:27 crc kubenswrapper[4752]: I0122 10:32:27.725350 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66d8fd85af8a62cbf6d844a6a3cd419c43895f7d9d194b9b69dabd0d0f78951a"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:32:27 crc kubenswrapper[4752]: I0122 10:32:27.725475 4752 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://66d8fd85af8a62cbf6d844a6a3cd419c43895f7d9d194b9b69dabd0d0f78951a" gracePeriod=600 Jan 22 10:32:28 crc kubenswrapper[4752]: I0122 10:32:28.066596 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="66d8fd85af8a62cbf6d844a6a3cd419c43895f7d9d194b9b69dabd0d0f78951a" exitCode=0 Jan 22 10:32:28 crc kubenswrapper[4752]: I0122 10:32:28.066836 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"66d8fd85af8a62cbf6d844a6a3cd419c43895f7d9d194b9b69dabd0d0f78951a"} Jan 22 10:32:28 crc kubenswrapper[4752]: I0122 10:32:28.066978 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"65d7aaf92a1adc89263932ce8b3a2116ed56843d8468de5cdaef20db861a025e"} Jan 22 10:32:28 crc kubenswrapper[4752]: I0122 10:32:28.066996 4752 scope.go:117] "RemoveContainer" containerID="d454c628f773375238c677575b36049b845768c2aea0dadc4cb921c798aa21f4" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.063456 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd"] Jan 22 10:34:51 crc kubenswrapper[4752]: E0122 10:34:51.064744 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc1f620-50fd-495f-b9e0-7e676820eece" containerName="collect-profiles" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.064762 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc1f620-50fd-495f-b9e0-7e676820eece" containerName="collect-profiles" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.064936 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc1f620-50fd-495f-b9e0-7e676820eece" containerName="collect-profiles" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.065548 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.069404 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.070067 4752 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dssnx" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.070334 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.075263 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd"] Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.080789 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-h8j54"] Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.081782 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h8j54" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.085741 4752 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rnrbh" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.107676 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h8j54"] Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.120416 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tkm89"] Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.121485 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.128567 4752 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dp7r2" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.132419 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tkm89"] Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.173680 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skd2\" (UniqueName: \"kubernetes.io/projected/8b7bd449-d569-430d-9a96-bc31b78bc35c-kube-api-access-2skd2\") pod \"cert-manager-webhook-687f57d79b-tkm89\" (UID: \"8b7bd449-d569-430d-9a96-bc31b78bc35c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.173743 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kql79\" (UniqueName: \"kubernetes.io/projected/eaf0c26f-7e81-4a29-8fb2-93ebc20af49a-kube-api-access-kql79\") pod \"cert-manager-cainjector-cf98fcc89-9m2wd\" (UID: \"eaf0c26f-7e81-4a29-8fb2-93ebc20af49a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.173763 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl6kg\" (UniqueName: \"kubernetes.io/projected/a7c2d1ac-46f4-4703-ba8d-4ae095424108-kube-api-access-dl6kg\") pod \"cert-manager-858654f9db-h8j54\" (UID: \"a7c2d1ac-46f4-4703-ba8d-4ae095424108\") " pod="cert-manager/cert-manager-858654f9db-h8j54" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.274551 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2skd2\" (UniqueName: \"kubernetes.io/projected/8b7bd449-d569-430d-9a96-bc31b78bc35c-kube-api-access-2skd2\") pod \"cert-manager-webhook-687f57d79b-tkm89\" (UID: \"8b7bd449-d569-430d-9a96-bc31b78bc35c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.274604 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kql79\" (UniqueName: \"kubernetes.io/projected/eaf0c26f-7e81-4a29-8fb2-93ebc20af49a-kube-api-access-kql79\") pod \"cert-manager-cainjector-cf98fcc89-9m2wd\" (UID: \"eaf0c26f-7e81-4a29-8fb2-93ebc20af49a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.274626 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl6kg\" (UniqueName: 
\"kubernetes.io/projected/a7c2d1ac-46f4-4703-ba8d-4ae095424108-kube-api-access-dl6kg\") pod \"cert-manager-858654f9db-h8j54\" (UID: \"a7c2d1ac-46f4-4703-ba8d-4ae095424108\") " pod="cert-manager/cert-manager-858654f9db-h8j54" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.298764 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl6kg\" (UniqueName: \"kubernetes.io/projected/a7c2d1ac-46f4-4703-ba8d-4ae095424108-kube-api-access-dl6kg\") pod \"cert-manager-858654f9db-h8j54\" (UID: \"a7c2d1ac-46f4-4703-ba8d-4ae095424108\") " pod="cert-manager/cert-manager-858654f9db-h8j54" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.298764 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kql79\" (UniqueName: \"kubernetes.io/projected/eaf0c26f-7e81-4a29-8fb2-93ebc20af49a-kube-api-access-kql79\") pod \"cert-manager-cainjector-cf98fcc89-9m2wd\" (UID: \"eaf0c26f-7e81-4a29-8fb2-93ebc20af49a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.299393 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2skd2\" (UniqueName: \"kubernetes.io/projected/8b7bd449-d569-430d-9a96-bc31b78bc35c-kube-api-access-2skd2\") pod \"cert-manager-webhook-687f57d79b-tkm89\" (UID: \"8b7bd449-d569-430d-9a96-bc31b78bc35c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.387588 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.400481 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h8j54" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.437798 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.709218 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tkm89"] Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.719710 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.837151 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h8j54"] Jan 22 10:34:51 crc kubenswrapper[4752]: W0122 10:34:51.844358 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7c2d1ac_46f4_4703_ba8d_4ae095424108.slice/crio-55f8db9ea503f4a243c3bfbcdf97910217c8b5bf56180ae44d2bf0a024a5530f WatchSource:0}: Error finding container 55f8db9ea503f4a243c3bfbcdf97910217c8b5bf56180ae44d2bf0a024a5530f: Status 404 returned error can't find the container with id 55f8db9ea503f4a243c3bfbcdf97910217c8b5bf56180ae44d2bf0a024a5530f Jan 22 10:34:51 crc kubenswrapper[4752]: I0122 10:34:51.850059 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd"] Jan 22 10:34:52 crc kubenswrapper[4752]: I0122 10:34:52.008113 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" event={"ID":"eaf0c26f-7e81-4a29-8fb2-93ebc20af49a","Type":"ContainerStarted","Data":"b1d571c717102f4835ef6437d16874d91fa1cb7fc25a6c33e50216619405347b"} Jan 22 10:34:52 crc kubenswrapper[4752]: I0122 10:34:52.010180 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h8j54" event={"ID":"a7c2d1ac-46f4-4703-ba8d-4ae095424108","Type":"ContainerStarted","Data":"55f8db9ea503f4a243c3bfbcdf97910217c8b5bf56180ae44d2bf0a024a5530f"} Jan 22 10:34:52 crc kubenswrapper[4752]: I0122 10:34:52.011359 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" event={"ID":"8b7bd449-d569-430d-9a96-bc31b78bc35c","Type":"ContainerStarted","Data":"20700b8e5f1b320e971b3f5dea1da39b667c734da57be9eb2e6810424004814a"} Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.047631 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h8j54" event={"ID":"a7c2d1ac-46f4-4703-ba8d-4ae095424108","Type":"ContainerStarted","Data":"76d082d1e534a434ac3380150398fc487eda839b4a1f2ed52462015ac45a7374"} Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.049955 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" event={"ID":"8b7bd449-d569-430d-9a96-bc31b78bc35c","Type":"ContainerStarted","Data":"e00cfa3694900aeddaaf561d07b28e08c6334bfdf5f5fe6f172301c3879f2049"} Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.050065 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.052450 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" event={"ID":"eaf0c26f-7e81-4a29-8fb2-93ebc20af49a","Type":"ContainerStarted","Data":"bd4f0178796576dfb0359a2f8c82cfa699584b6876513dd004f5aac2f2de90c1"} Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.094360 4752 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9m2wd" podStartSLOduration=2.039571731 podStartE2EDuration="6.094331488s" podCreationTimestamp="2026-01-22 10:34:51 +0000 UTC" firstStartedPulling="2026-01-22 10:34:51.855347099 +0000 UTC m=+571.085290007" lastFinishedPulling="2026-01-22 10:34:55.910106856 +0000 UTC m=+575.140049764" observedRunningTime="2026-01-22 10:34:57.08627037 +0000 UTC m=+576.316213278" watchObservedRunningTime="2026-01-22 10:34:57.094331488 +0000 UTC m=+576.324274396" Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.094770 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-h8j54" podStartSLOduration=2.032030966 podStartE2EDuration="6.094763409s" podCreationTimestamp="2026-01-22 10:34:51 +0000 UTC" firstStartedPulling="2026-01-22 10:34:51.847201499 +0000 UTC m=+571.077144407" lastFinishedPulling="2026-01-22 10:34:55.909933912 +0000 UTC m=+575.139876850" observedRunningTime="2026-01-22 10:34:57.071758156 +0000 UTC m=+576.301701084" watchObservedRunningTime="2026-01-22 10:34:57.094763409 +0000 UTC m=+576.324706317" Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.115169 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" podStartSLOduration=1.930426249 podStartE2EDuration="6.115146964s" podCreationTimestamp="2026-01-22 10:34:51 +0000 UTC" firstStartedPulling="2026-01-22 10:34:51.719364076 +0000 UTC m=+570.949306974" lastFinishedPulling="2026-01-22 10:34:55.904084781 +0000 UTC m=+575.134027689" observedRunningTime="2026-01-22 10:34:57.113404309 +0000 UTC m=+576.343347217" watchObservedRunningTime="2026-01-22 10:34:57.115146964 +0000 UTC m=+576.345089872" Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.723819 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:34:57 crc kubenswrapper[4752]: I0122 10:34:57.723955 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.648209 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-784rk"] Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649250 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-controller" containerID="cri-o://4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649284 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="nbdb" containerID="cri-o://fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649387 4752 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="sbdb" containerID="cri-o://1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649433 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-node" containerID="cri-o://4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649421 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649494 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-acl-logging" containerID="cri-o://b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.649474 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="northd" containerID="cri-o://8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.684843 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" containerID="cri-o://27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" gracePeriod=30 Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.994346 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/2.log" Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.997243 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovn-acl-logging/0.log" Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.997829 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovn-controller/0.log" Jan 22 10:35:00 crc kubenswrapper[4752]: I0122 10:35:00.998381 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.055828 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ql5qw"] Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056059 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056074 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056081 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056087 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056095 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-acl-logging" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056101 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-acl-logging" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056112 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056120 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056128 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="sbdb" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056134 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="sbdb" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056141 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kubecfg-setup" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056147 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kubecfg-setup" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056155 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056161 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056170 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056175 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056181 4752 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056186 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056196 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="northd" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056202 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="northd" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056212 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-node" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056218 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-node" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.056229 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="nbdb" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056234 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="nbdb" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056339 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056348 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056354 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056363 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovn-acl-logging" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056370 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="kube-rbac-proxy-node" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056376 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056383 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="nbdb" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056390 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="sbdb" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056396 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056403 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="northd" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.056611 4752 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerName="ovnkube-controller" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.058249 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.079022 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6nmbt_25322265-5a85-4c78-bf60-61836307404e/kube-multus/1.log" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.079554 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6nmbt_25322265-5a85-4c78-bf60-61836307404e/kube-multus/0.log" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.079615 4752 generic.go:334] "Generic (PLEG): container finished" podID="25322265-5a85-4c78-bf60-61836307404e" containerID="88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4" exitCode=2 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.079706 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6nmbt" event={"ID":"25322265-5a85-4c78-bf60-61836307404e","Type":"ContainerDied","Data":"88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.079772 4752 scope.go:117] "RemoveContainer" containerID="6ea96bd965ff956c6d61aca5bf53944946a5cfca4341715ef3cdb4c25b17811e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.080209 4752 scope.go:117] "RemoveContainer" containerID="88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.080591 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6nmbt_openshift-multus(25322265-5a85-4c78-bf60-61836307404e)\"" pod="openshift-multus/multus-6nmbt" podUID="25322265-5a85-4c78-bf60-61836307404e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.083292 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovnkube-controller/2.log" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.086728 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovn-acl-logging/0.log" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087346 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-784rk_bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/ovn-controller/0.log" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087786 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" exitCode=0 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087822 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" exitCode=0 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087833 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" exitCode=0 Jan 22 
10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087842 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" exitCode=0 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087870 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" exitCode=0 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087880 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" exitCode=0 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087889 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" exitCode=143 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087899 4752 generic.go:334] "Generic (PLEG): container finished" podID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" exitCode=143 Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087889 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087950 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087968 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087976 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.087986 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088095 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088114 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088129 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088147 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088169 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088176 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088183 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088191 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088198 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088205 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088213 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088220 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} Jan 22 10:35:01 crc 
kubenswrapper[4752]: I0122 10:35:01.088231 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088243 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088252 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088258 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088265 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088272 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088279 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088286 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088292 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088299 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088306 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088315 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088327 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088344 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088352 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088359 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088365 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088372 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088379 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088385 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088392 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088398 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088438 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-784rk" event={"ID":"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25","Type":"ContainerDied","Data":"a48cec0bb0d694184188ffe2af6c7bcac6910a68a0f4d7d7b96f7e40ade0e9f3"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088453 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088461 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088468 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088474 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088481 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088487 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088494 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088501 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088508 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.088514 4752 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115242 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-ovn-kubernetes\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115277 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-netd\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115298 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-bin\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115320 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-node-log\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115344 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-systemd\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115372 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-env-overrides\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115386 4752 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-netns\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115400 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-log-socket\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115413 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-slash\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115432 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-openvswitch\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115446 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-etc-openvswitch\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115472 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-var-lib-openvswitch\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115503 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-script-lib\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115521 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-ovn\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115538 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6gs\" (UniqueName: \"kubernetes.io/projected/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-kube-api-access-sb6gs\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115556 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovn-node-metrics-cert\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115574 
4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-config\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115590 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-kubelet\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115616 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-systemd-units\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115631 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\" (UID: \"bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25\") " Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115835 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115919 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115938 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115954 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.115968 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116005 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116034 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-node-log" (OuterVolumeSpecName: "node-log") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116105 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116133 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116224 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-log-socket" (OuterVolumeSpecName: "log-socket") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116201 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116307 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-slash" (OuterVolumeSpecName: "host-slash") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116339 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116371 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116415 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116678 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.116733 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.120444 4752 scope.go:117] "RemoveContainer" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.123582 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.123816 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-kube-api-access-sb6gs" (OuterVolumeSpecName: "kube-api-access-sb6gs") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "kube-api-access-sb6gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.152467 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.152800 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" (UID: "bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.172409 4752 scope.go:117] "RemoveContainer" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.185127 4752 scope.go:117] "RemoveContainer" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.201351 4752 scope.go:117] "RemoveContainer" containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217457 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-kubelet\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217508 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec429a0e-f849-403f-bc16-e0c09c18b529-ovn-node-metrics-cert\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217532 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-env-overrides\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217548 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-var-lib-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217564 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-etc-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217585 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-ovnkube-config\") pod \"ovnkube-node-ql5qw\" (UID: 
\"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217648 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-log-socket\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217665 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-cni-netd\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217702 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-systemd-units\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217722 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-run-ovn-kubernetes\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217745 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-systemd\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217767 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217785 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-slash\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217804 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217831 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zwch\" 
(UniqueName: \"kubernetes.io/projected/ec429a0e-f849-403f-bc16-e0c09c18b529-kube-api-access-9zwch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217847 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-run-netns\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217889 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-ovnkube-script-lib\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217908 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-node-log\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217945 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-cni-bin\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.217962 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-ovn\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218000 4752 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218013 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6gs\" (UniqueName: \"kubernetes.io/projected/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-kube-api-access-sb6gs\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218023 4752 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218033 4752 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218041 4752 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-kubelet\") on node \"crc\" 
DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218050 4752 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218059 4752 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218069 4752 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218078 4752 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218086 4752 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218094 4752 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218102 4752 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218111 4752 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218120 4752 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218128 4752 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218139 4752 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218380 4752 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218395 4752 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc 
kubenswrapper[4752]: I0122 10:35:01.218405 4752 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218415 4752 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.218932 4752 scope.go:117] "RemoveContainer" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.234080 4752 scope.go:117] "RemoveContainer" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.249956 4752 scope.go:117] "RemoveContainer" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.264256 4752 scope.go:117] "RemoveContainer" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.280342 4752 scope.go:117] "RemoveContainer" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.300383 4752 scope.go:117] "RemoveContainer" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.301137 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": container with ID starting with 27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e not found: ID does not exist" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.301197 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} err="failed to get container status \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": rpc error: code = NotFound desc = could not find container \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": container with ID starting with 27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.301234 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.301867 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": container with ID starting with 9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6 not found: ID does not exist" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.301904 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} err="failed 
to get container status \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": rpc error: code = NotFound desc = could not find container \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": container with ID starting with 9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.301929 4752 scope.go:117] "RemoveContainer" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.302397 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": container with ID starting with 1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864 not found: ID does not exist" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.302467 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} err="failed to get container status \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": rpc error: code = NotFound desc = could not find container \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": container with ID starting with 1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.302505 4752 scope.go:117] "RemoveContainer" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.302830 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": container with ID starting with fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12 not found: ID does not exist" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.302873 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} err="failed to get container status \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": rpc error: code = NotFound desc = could not find container \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": container with ID starting with fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.302893 4752 scope.go:117] "RemoveContainer" containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.303248 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": container with ID starting with 8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b not found: ID does not exist" containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.303288 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} err="failed to get container status \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": rpc error: code = NotFound desc = could not find container \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": container with ID starting with 8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.303312 4752 scope.go:117] "RemoveContainer" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.303631 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": container with ID starting with 8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4 not found: ID does not exist" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.303662 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} err="failed to get container status \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": rpc error: code = NotFound desc = could not find container \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": container with ID starting with 8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.303680 4752 scope.go:117] "RemoveContainer" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.304108 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": container with ID starting with 4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616 not found: ID does not exist" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.304161 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} err="failed to get container status \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": rpc error: code = NotFound desc = could not find container \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": container with ID starting with 4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.304190 4752 scope.go:117] "RemoveContainer" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.304705 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": container with ID starting with b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed not found: ID does 
not exist" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.304741 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} err="failed to get container status \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": rpc error: code = NotFound desc = could not find container \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": container with ID starting with b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.304765 4752 scope.go:117] "RemoveContainer" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.305148 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": container with ID starting with 4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416 not found: ID does not exist" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.305189 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} err="failed to get container status \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": rpc error: code = NotFound desc = could not find container \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": container with ID starting with 4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.305204 4752 scope.go:117] "RemoveContainer" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" Jan 22 10:35:01 crc kubenswrapper[4752]: E0122 10:35:01.305639 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": container with ID starting with fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941 not found: ID does not exist" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.305675 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} err="failed to get container status \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": rpc error: code = NotFound desc = could not find container \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": container with ID starting with fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.305696 4752 scope.go:117] "RemoveContainer" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.306012 4752 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} err="failed to get container status \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": rpc error: code = NotFound desc = could not find container \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": container with ID starting with 27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.306032 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.306370 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} err="failed to get container status \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": rpc error: code = NotFound desc = could not find container \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": container with ID starting with 9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.306399 4752 scope.go:117] "RemoveContainer" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.306746 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} err="failed to get container status \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": rpc error: code = NotFound desc = could not find container \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": container with ID starting with 1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.306785 4752 scope.go:117] "RemoveContainer" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.307128 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} err="failed to get container status \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": rpc error: code = NotFound desc = could not find container \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": container with ID starting with fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.307154 4752 scope.go:117] "RemoveContainer" containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.307514 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} err="failed to get container status \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": rpc error: code = NotFound desc = could not find container \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": container with ID starting with 8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b not found: ID does not exist" Jan 
22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.307549 4752 scope.go:117] "RemoveContainer" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.307989 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} err="failed to get container status \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": rpc error: code = NotFound desc = could not find container \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": container with ID starting with 8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.308007 4752 scope.go:117] "RemoveContainer" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.309168 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} err="failed to get container status \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": rpc error: code = NotFound desc = could not find container \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": container with ID starting with 4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.309201 4752 scope.go:117] "RemoveContainer" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.309550 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} err="failed to get container status \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": rpc error: code = NotFound desc = could not find container \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": container with ID starting with b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.309594 4752 scope.go:117] "RemoveContainer" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.309921 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} err="failed to get container status \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": rpc error: code = NotFound desc = could not find container \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": container with ID starting with 4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.309945 4752 scope.go:117] "RemoveContainer" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.310234 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} err="failed to get container status 
\"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": rpc error: code = NotFound desc = could not find container \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": container with ID starting with fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.310263 4752 scope.go:117] "RemoveContainer" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.310613 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} err="failed to get container status \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": rpc error: code = NotFound desc = could not find container \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": container with ID starting with 27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.310636 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.310996 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} err="failed to get container status \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": rpc error: code = NotFound desc = could not find container \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": container with ID starting with 9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.311027 4752 scope.go:117] "RemoveContainer" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.311359 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} err="failed to get container status \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": rpc error: code = NotFound desc = could not find container \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": container with ID starting with 1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.311381 4752 scope.go:117] "RemoveContainer" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.311694 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} err="failed to get container status \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": rpc error: code = NotFound desc = could not find container \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": container with ID starting with fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.311721 4752 scope.go:117] "RemoveContainer" 
containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.312161 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} err="failed to get container status \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": rpc error: code = NotFound desc = could not find container \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": container with ID starting with 8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.312188 4752 scope.go:117] "RemoveContainer" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.312485 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} err="failed to get container status \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": rpc error: code = NotFound desc = could not find container \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": container with ID starting with 8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.312515 4752 scope.go:117] "RemoveContainer" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.312793 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} err="failed to get container status \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": rpc error: code = NotFound desc = could not find container \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": container with ID starting with 4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.312825 4752 scope.go:117] "RemoveContainer" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.313149 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} err="failed to get container status \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": rpc error: code = NotFound desc = could not find container \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": container with ID starting with b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.313179 4752 scope.go:117] "RemoveContainer" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.313610 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} err="failed to get container status \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": rpc error: code = NotFound desc = could not find 
container \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": container with ID starting with 4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.313637 4752 scope.go:117] "RemoveContainer" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.313907 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} err="failed to get container status \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": rpc error: code = NotFound desc = could not find container \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": container with ID starting with fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.313937 4752 scope.go:117] "RemoveContainer" containerID="27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.314413 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e"} err="failed to get container status \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": rpc error: code = NotFound desc = could not find container \"27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e\": container with ID starting with 27fe8ff40aeef358e5e6401f0eaa5dc400d4b43948373661f2a9ba1d45f0ea9e not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.314438 4752 scope.go:117] "RemoveContainer" containerID="9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.314771 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6"} err="failed to get container status \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": rpc error: code = NotFound desc = could not find container \"9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6\": container with ID starting with 9785fdad447569e91d59e5c3af58605cbfc32f685d9c1b351495d75f66bf82c6 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.314808 4752 scope.go:117] "RemoveContainer" containerID="1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.315083 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864"} err="failed to get container status \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": rpc error: code = NotFound desc = could not find container \"1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864\": container with ID starting with 1a97cdbc51c056bded321e2032845ff27fc1b82eca9dbe9867b3abfe8d27e864 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.315111 4752 scope.go:117] "RemoveContainer" containerID="fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.315456 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12"} err="failed to get container status \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": rpc error: code = NotFound desc = could not find container \"fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12\": container with ID starting with fc05e49c368fceaa75f07acae60654071a11d513fbb3e9da20e31bc4fb160a12 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.315479 4752 scope.go:117] "RemoveContainer" containerID="8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.315734 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b"} err="failed to get container status \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": rpc error: code = NotFound desc = could not find container \"8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b\": container with ID starting with 8e32b9c10d8a14c3bac2e2d3617c09cc381c628c63332f108cc5ac97da7a7c1b not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.315760 4752 scope.go:117] "RemoveContainer" containerID="8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.316018 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4"} err="failed to get container status \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": rpc error: code = NotFound desc = could not find container \"8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4\": container with ID starting with 8c2203c0de1ff70fdc1b9c971cf893f144bf5a6c3cb344cd1cbb2b545bed9ed4 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.316041 4752 scope.go:117] "RemoveContainer" containerID="4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.316317 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616"} err="failed to get container status \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": rpc error: code = NotFound desc = could not find container \"4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616\": container with ID starting with 4c48731785c86d91b434be89704f6fa54c31dd722faa7d6322ad4e79d23c4616 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.316339 4752 scope.go:117] "RemoveContainer" containerID="b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.317415 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed"} err="failed to get container status \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": rpc error: code = NotFound desc = could not find container \"b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed\": container with ID starting with 
b06591b1aaf890d10b0b8f7cf1fda559989f15a980e771e76e56fbc33a7179ed not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.317445 4752 scope.go:117] "RemoveContainer" containerID="4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.318057 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416"} err="failed to get container status \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": rpc error: code = NotFound desc = could not find container \"4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416\": container with ID starting with 4d277f4bde26b6a82a81004af3552405670685b8eb2daf4bcec9f2a3ff5a8416 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.318088 4752 scope.go:117] "RemoveContainer" containerID="fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.318372 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941"} err="failed to get container status \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": rpc error: code = NotFound desc = could not find container \"fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941\": container with ID starting with fcdabc858f57c2d84acc69db278a42ef0225b8a83e7a75304f9a790d7b16a941 not found: ID does not exist" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319040 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-cni-bin\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319096 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-ovn\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319138 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-kubelet\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319148 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-cni-bin\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319177 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec429a0e-f849-403f-bc16-e0c09c18b529-ovn-node-metrics-cert\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 
crc kubenswrapper[4752]: I0122 10:35:01.319210 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-env-overrides\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319218 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-kubelet\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319244 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-var-lib-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319251 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-ovn\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319275 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-etc-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319316 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-ovnkube-config\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319341 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-var-lib-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319360 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-log-socket\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319396 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-etc-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319422 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-log-socket\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319439 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-cni-netd\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319475 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-systemd-units\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319480 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-cni-netd\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319494 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-run-ovn-kubernetes\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319524 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-systemd\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319530 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-systemd-units\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319551 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319576 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-run-ovn-kubernetes\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319607 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-systemd\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319611 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-slash\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319580 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-slash\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319634 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319689 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319739 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zwch\" (UniqueName: \"kubernetes.io/projected/ec429a0e-f849-403f-bc16-e0c09c18b529-kube-api-access-9zwch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319772 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-run-netns\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319811 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-ovnkube-script-lib\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-node-log\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319919 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-host-run-netns\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.319968 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-node-log\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.320132 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec429a0e-f849-403f-bc16-e0c09c18b529-run-openvswitch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.320285 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-ovnkube-config\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.321365 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-ovnkube-script-lib\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.321821 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec429a0e-f849-403f-bc16-e0c09c18b529-env-overrides\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.323322 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec429a0e-f849-403f-bc16-e0c09c18b529-ovn-node-metrics-cert\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.346491 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zwch\" (UniqueName: \"kubernetes.io/projected/ec429a0e-f849-403f-bc16-e0c09c18b529-kube-api-access-9zwch\") pod \"ovnkube-node-ql5qw\" (UID: \"ec429a0e-f849-403f-bc16-e0c09c18b529\") " pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.375263 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.431446 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-784rk"] Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.436212 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-784rk"] Jan 22 10:35:01 crc kubenswrapper[4752]: I0122 10:35:01.441940 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-tkm89" Jan 22 10:35:02 crc kubenswrapper[4752]: I0122 10:35:02.099144 4752 generic.go:334] "Generic (PLEG): container finished" podID="ec429a0e-f849-403f-bc16-e0c09c18b529" containerID="f67f748ae1a59eb5956038bfeda48b9e4034c679c0ae18f8483681fcdbf2aa46" exitCode=0 Jan 22 10:35:02 crc kubenswrapper[4752]: I0122 10:35:02.099225 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerDied","Data":"f67f748ae1a59eb5956038bfeda48b9e4034c679c0ae18f8483681fcdbf2aa46"} Jan 22 10:35:02 crc kubenswrapper[4752]: I0122 10:35:02.099649 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"608dbdee672283dc6b1a87982ad44aab5a300ba8f3f8e15bb60527af07e062bf"} Jan 22 10:35:02 crc kubenswrapper[4752]: I0122 10:35:02.104969 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6nmbt_25322265-5a85-4c78-bf60-61836307404e/kube-multus/1.log" Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.105321 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25" path="/var/lib/kubelet/pods/bdaf9138-3ac1-4555-93c0-c8ddc3ef2c25/volumes" Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.112461 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"81b4600d1e102d4c0d328e7eb1889dffe0f47e3af3e222b694fd2c9ad6bffbd1"} Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.112521 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"21551d80ab1b0452f48559dbd62172efb66922cca50f95e639b76a02ef2e2d56"} Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.112537 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"4bc0217cf5917a883687dd98f3c7a61586dd8de0e891d4307ee65f5b656aa742"} Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.112554 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"d389d4bc9c4a67577a04f6e612d5d0a778f5009ad637f4997fc8b18d059bf7c1"} Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.112569 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" 
event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"668f62088a4fd19f9bffb5e212cd136d687732ab81d076af92774e227332c097"} Jan 22 10:35:03 crc kubenswrapper[4752]: I0122 10:35:03.112580 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"c7e43be4d7351e95e7db4b4f7200dbb99537eb96a202165472a8fce5f020ba80"} Jan 22 10:35:06 crc kubenswrapper[4752]: I0122 10:35:06.141689 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"98de802f10b43fd7b15f0aaa73f9cc7ea574c3b5ae5f2bdc155ad2ad23fe6bf6"} Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.155983 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" event={"ID":"ec429a0e-f849-403f-bc16-e0c09c18b529","Type":"ContainerStarted","Data":"40a70787f59199b47fce80f7783e145f268933d09557f38655d42622f823edfc"} Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.156810 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.156838 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.156923 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.184865 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.190167 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:08 crc kubenswrapper[4752]: I0122 10:35:08.191237 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" podStartSLOduration=7.191222358 podStartE2EDuration="7.191222358s" podCreationTimestamp="2026-01-22 10:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:35:08.18856927 +0000 UTC m=+587.418512188" watchObservedRunningTime="2026-01-22 10:35:08.191222358 +0000 UTC m=+587.421165266" Jan 22 10:35:13 crc kubenswrapper[4752]: I0122 10:35:13.098122 4752 scope.go:117] "RemoveContainer" containerID="88e672fb91a91fc93be8f89f79772ed85c622395fbb531d323002e7240e518c4" Jan 22 10:35:14 crc kubenswrapper[4752]: I0122 10:35:14.197121 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6nmbt_25322265-5a85-4c78-bf60-61836307404e/kube-multus/1.log" Jan 22 10:35:14 crc kubenswrapper[4752]: I0122 10:35:14.197630 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6nmbt" event={"ID":"25322265-5a85-4c78-bf60-61836307404e","Type":"ContainerStarted","Data":"dd87a1465215fd4661c0dad142ecf812cd0f34945e93c8b801c9e3abd787bf6c"} Jan 22 10:35:27 crc kubenswrapper[4752]: I0122 10:35:27.724043 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:35:27 crc kubenswrapper[4752]: I0122 10:35:27.724764 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.400911 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22"] Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.415423 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.418689 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.420627 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ql5qw" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.430089 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22"] Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.578016 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.578204 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b54hv\" (UniqueName: \"kubernetes.io/projected/3def5418-857a-4a2c-a4b1-ce57011e0ce3-kube-api-access-b54hv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.578276 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.679169 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.679259 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.679365 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b54hv\" (UniqueName: \"kubernetes.io/projected/3def5418-857a-4a2c-a4b1-ce57011e0ce3-kube-api-access-b54hv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.679768 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.679912 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.699617 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b54hv\" (UniqueName: \"kubernetes.io/projected/3def5418-857a-4a2c-a4b1-ce57011e0ce3-kube-api-access-b54hv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.742651 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:31 crc kubenswrapper[4752]: I0122 10:35:31.919910 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22"] Jan 22 10:35:31 crc kubenswrapper[4752]: W0122 10:35:31.927099 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3def5418_857a_4a2c_a4b1_ce57011e0ce3.slice/crio-0a0033e9a1d84f9980e531a9f490ce571e718217a4dd6b65f80fae45dcb6542c WatchSource:0}: Error finding container 0a0033e9a1d84f9980e531a9f490ce571e718217a4dd6b65f80fae45dcb6542c: Status 404 returned error can't find the container with id 0a0033e9a1d84f9980e531a9f490ce571e718217a4dd6b65f80fae45dcb6542c Jan 22 10:35:32 crc kubenswrapper[4752]: I0122 10:35:32.321050 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" event={"ID":"3def5418-857a-4a2c-a4b1-ce57011e0ce3","Type":"ContainerStarted","Data":"46f143defdffad0498cab65411621b78d2df3142305d5a11fcef9eae10052df5"} Jan 22 10:35:32 crc kubenswrapper[4752]: I0122 10:35:32.321116 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" event={"ID":"3def5418-857a-4a2c-a4b1-ce57011e0ce3","Type":"ContainerStarted","Data":"0a0033e9a1d84f9980e531a9f490ce571e718217a4dd6b65f80fae45dcb6542c"} Jan 22 10:35:33 crc kubenswrapper[4752]: I0122 10:35:33.329267 4752 generic.go:334] "Generic (PLEG): container finished" podID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerID="46f143defdffad0498cab65411621b78d2df3142305d5a11fcef9eae10052df5" exitCode=0 Jan 22 10:35:33 crc kubenswrapper[4752]: I0122 10:35:33.329403 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" event={"ID":"3def5418-857a-4a2c-a4b1-ce57011e0ce3","Type":"ContainerDied","Data":"46f143defdffad0498cab65411621b78d2df3142305d5a11fcef9eae10052df5"} Jan 22 10:35:35 crc kubenswrapper[4752]: I0122 10:35:35.349569 4752 generic.go:334] "Generic (PLEG): container finished" podID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerID="3e55bde3d27b2f40493cd38a4757b99b4387d1e839bfe7ffed8a41f7a6defd86" exitCode=0 Jan 22 10:35:35 crc kubenswrapper[4752]: I0122 10:35:35.349724 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" event={"ID":"3def5418-857a-4a2c-a4b1-ce57011e0ce3","Type":"ContainerDied","Data":"3e55bde3d27b2f40493cd38a4757b99b4387d1e839bfe7ffed8a41f7a6defd86"} Jan 22 10:35:36 crc kubenswrapper[4752]: I0122 10:35:36.398925 4752 generic.go:334] "Generic (PLEG): container finished" podID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerID="dbbcd69d1b639889abf91cf6d291d16c44acc88cae78e0301e1f829ce09daf62" exitCode=0 Jan 22 10:35:36 crc kubenswrapper[4752]: I0122 10:35:36.399398 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" event={"ID":"3def5418-857a-4a2c-a4b1-ce57011e0ce3","Type":"ContainerDied","Data":"dbbcd69d1b639889abf91cf6d291d16c44acc88cae78e0301e1f829ce09daf62"} Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.727421 4752 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.861307 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b54hv\" (UniqueName: \"kubernetes.io/projected/3def5418-857a-4a2c-a4b1-ce57011e0ce3-kube-api-access-b54hv\") pod \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.861384 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-bundle\") pod \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.861426 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-util\") pod \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\" (UID: \"3def5418-857a-4a2c-a4b1-ce57011e0ce3\") " Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.863407 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-bundle" (OuterVolumeSpecName: "bundle") pod "3def5418-857a-4a2c-a4b1-ce57011e0ce3" (UID: "3def5418-857a-4a2c-a4b1-ce57011e0ce3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.867086 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3def5418-857a-4a2c-a4b1-ce57011e0ce3-kube-api-access-b54hv" (OuterVolumeSpecName: "kube-api-access-b54hv") pod "3def5418-857a-4a2c-a4b1-ce57011e0ce3" (UID: "3def5418-857a-4a2c-a4b1-ce57011e0ce3"). InnerVolumeSpecName "kube-api-access-b54hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.875360 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-util" (OuterVolumeSpecName: "util") pod "3def5418-857a-4a2c-a4b1-ce57011e0ce3" (UID: "3def5418-857a-4a2c-a4b1-ce57011e0ce3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.962161 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b54hv\" (UniqueName: \"kubernetes.io/projected/3def5418-857a-4a2c-a4b1-ce57011e0ce3-kube-api-access-b54hv\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.962209 4752 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:37 crc kubenswrapper[4752]: I0122 10:35:37.962219 4752 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3def5418-857a-4a2c-a4b1-ce57011e0ce3-util\") on node \"crc\" DevicePath \"\"" Jan 22 10:35:38 crc kubenswrapper[4752]: I0122 10:35:38.415402 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" event={"ID":"3def5418-857a-4a2c-a4b1-ce57011e0ce3","Type":"ContainerDied","Data":"0a0033e9a1d84f9980e531a9f490ce571e718217a4dd6b65f80fae45dcb6542c"} Jan 22 10:35:38 crc kubenswrapper[4752]: I0122 10:35:38.415437 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fzf22" Jan 22 10:35:38 crc kubenswrapper[4752]: I0122 10:35:38.415445 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a0033e9a1d84f9980e531a9f490ce571e718217a4dd6b65f80fae45dcb6542c" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.830312 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs"] Jan 22 10:35:49 crc kubenswrapper[4752]: E0122 10:35:49.831954 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="util" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.832064 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="util" Jan 22 10:35:49 crc kubenswrapper[4752]: E0122 10:35:49.832143 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="pull" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.832210 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="pull" Jan 22 10:35:49 crc kubenswrapper[4752]: E0122 10:35:49.832281 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="extract" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.832346 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="extract" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.832533 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3def5418-857a-4a2c-a4b1-ce57011e0ce3" containerName="extract" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.833128 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.835553 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.836068 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-26xb8" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.837507 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.849193 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs"] Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.978802 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd"] Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.979657 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.982321 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.985785 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2db9r" Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.996045 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd"] Jan 22 10:35:49 crc kubenswrapper[4752]: I0122 10:35:49.999257 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.000121 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.003183 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxbk\" (UniqueName: \"kubernetes.io/projected/b37990f5-f8af-4e35-a38b-7abd0fe076a6-kube-api-access-hdxbk\") pod \"obo-prometheus-operator-68bc856cb9-6g8xs\" (UID: \"b37990f5-f8af-4e35-a38b-7abd0fe076a6\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.032394 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.104456 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ac77310-4d33-4bd8-af5a-2a28644e950a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd\" (UID: \"9ac77310-4d33-4bd8-af5a-2a28644e950a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.104500 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ac77310-4d33-4bd8-af5a-2a28644e950a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd\" (UID: \"9ac77310-4d33-4bd8-af5a-2a28644e950a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.104547 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxbk\" (UniqueName: \"kubernetes.io/projected/b37990f5-f8af-4e35-a38b-7abd0fe076a6-kube-api-access-hdxbk\") pod \"obo-prometheus-operator-68bc856cb9-6g8xs\" (UID: \"b37990f5-f8af-4e35-a38b-7abd0fe076a6\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.104570 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/929db940-dde0-4ea6-9f4a-ec1c2af6efc0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt\" (UID: \"929db940-dde0-4ea6-9f4a-ec1c2af6efc0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.104588 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/929db940-dde0-4ea6-9f4a-ec1c2af6efc0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt\" (UID: \"929db940-dde0-4ea6-9f4a-ec1c2af6efc0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.125931 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxbk\" (UniqueName: \"kubernetes.io/projected/b37990f5-f8af-4e35-a38b-7abd0fe076a6-kube-api-access-hdxbk\") pod \"obo-prometheus-operator-68bc856cb9-6g8xs\" (UID: \"b37990f5-f8af-4e35-a38b-7abd0fe076a6\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" Jan 22 10:35:50 crc 
kubenswrapper[4752]: I0122 10:35:50.150394 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.178026 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fvt55"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.179018 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.181509 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.181637 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gff2b" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.194632 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fvt55"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.206362 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv899\" (UniqueName: \"kubernetes.io/projected/8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f-kube-api-access-lv899\") pod \"observability-operator-59bdc8b94-fvt55\" (UID: \"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f\") " pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.206451 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ac77310-4d33-4bd8-af5a-2a28644e950a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd\" (UID: \"9ac77310-4d33-4bd8-af5a-2a28644e950a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.206481 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ac77310-4d33-4bd8-af5a-2a28644e950a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd\" (UID: \"9ac77310-4d33-4bd8-af5a-2a28644e950a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.206530 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fvt55\" (UID: \"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f\") " pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.206571 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/929db940-dde0-4ea6-9f4a-ec1c2af6efc0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt\" (UID: \"929db940-dde0-4ea6-9f4a-ec1c2af6efc0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.206593 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/929db940-dde0-4ea6-9f4a-ec1c2af6efc0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt\" (UID: \"929db940-dde0-4ea6-9f4a-ec1c2af6efc0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.211319 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ac77310-4d33-4bd8-af5a-2a28644e950a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd\" (UID: \"9ac77310-4d33-4bd8-af5a-2a28644e950a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.212138 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/929db940-dde0-4ea6-9f4a-ec1c2af6efc0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt\" (UID: \"929db940-dde0-4ea6-9f4a-ec1c2af6efc0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.214562 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ac77310-4d33-4bd8-af5a-2a28644e950a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd\" (UID: \"9ac77310-4d33-4bd8-af5a-2a28644e950a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.235726 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/929db940-dde0-4ea6-9f4a-ec1c2af6efc0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt\" (UID: \"929db940-dde0-4ea6-9f4a-ec1c2af6efc0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.295646 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.309694 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv899\" (UniqueName: \"kubernetes.io/projected/8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f-kube-api-access-lv899\") pod \"observability-operator-59bdc8b94-fvt55\" (UID: \"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f\") " pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.309765 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fvt55\" (UID: \"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f\") " pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.315143 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fvt55\" (UID: \"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f\") " pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.323126 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.342242 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv899\" (UniqueName: \"kubernetes.io/projected/8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f-kube-api-access-lv899\") pod \"observability-operator-59bdc8b94-fvt55\" (UID: \"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f\") " pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.447325 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bck65"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.452470 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.457648 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-7wwv4" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.460646 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bck65"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.506827 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs"] Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.584383 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.617651 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mwt8\" (UniqueName: \"kubernetes.io/projected/a6bd47c7-e899-4f02-b199-b5b7e72f4734-kube-api-access-8mwt8\") pod \"perses-operator-5bf474d74f-bck65\" (UID: \"a6bd47c7-e899-4f02-b199-b5b7e72f4734\") " pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.617690 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6bd47c7-e899-4f02-b199-b5b7e72f4734-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bck65\" (UID: \"a6bd47c7-e899-4f02-b199-b5b7e72f4734\") " pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.718819 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mwt8\" (UniqueName: \"kubernetes.io/projected/a6bd47c7-e899-4f02-b199-b5b7e72f4734-kube-api-access-8mwt8\") pod \"perses-operator-5bf474d74f-bck65\" (UID: \"a6bd47c7-e899-4f02-b199-b5b7e72f4734\") " pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.718878 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6bd47c7-e899-4f02-b199-b5b7e72f4734-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bck65\" (UID: \"a6bd47c7-e899-4f02-b199-b5b7e72f4734\") " pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.719982 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a6bd47c7-e899-4f02-b199-b5b7e72f4734-openshift-service-ca\") pod \"perses-operator-5bf474d74f-bck65\" (UID: \"a6bd47c7-e899-4f02-b199-b5b7e72f4734\") " pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.728314 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt"] Jan 22 10:35:50 crc kubenswrapper[4752]: W0122 10:35:50.737367 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod929db940_dde0_4ea6_9f4a_ec1c2af6efc0.slice/crio-61376add4a034b3ea5c69bcc3b97ba84b70c7f6fd982f910681e510a451e31ef WatchSource:0}: Error finding container 61376add4a034b3ea5c69bcc3b97ba84b70c7f6fd982f910681e510a451e31ef: Status 404 returned error can't find the container with id 61376add4a034b3ea5c69bcc3b97ba84b70c7f6fd982f910681e510a451e31ef Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.743216 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mwt8\" (UniqueName: \"kubernetes.io/projected/a6bd47c7-e899-4f02-b199-b5b7e72f4734-kube-api-access-8mwt8\") pod \"perses-operator-5bf474d74f-bck65\" (UID: \"a6bd47c7-e899-4f02-b199-b5b7e72f4734\") " pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.774682 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.807799 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd"] Jan 22 10:35:50 crc kubenswrapper[4752]: W0122 10:35:50.812737 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac77310_4d33_4bd8_af5a_2a28644e950a.slice/crio-f616b26f9a5407c4a5f30c2d9d9afa41e940780286557541f0f67c8704139495 WatchSource:0}: Error finding container f616b26f9a5407c4a5f30c2d9d9afa41e940780286557541f0f67c8704139495: Status 404 returned error can't find the container with id f616b26f9a5407c4a5f30c2d9d9afa41e940780286557541f0f67c8704139495 Jan 22 10:35:50 crc kubenswrapper[4752]: I0122 10:35:50.853392 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fvt55"] Jan 22 10:35:51 crc kubenswrapper[4752]: I0122 10:35:51.203887 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-bck65"] Jan 22 10:35:51 crc kubenswrapper[4752]: I0122 10:35:51.495615 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" event={"ID":"9ac77310-4d33-4bd8-af5a-2a28644e950a","Type":"ContainerStarted","Data":"f616b26f9a5407c4a5f30c2d9d9afa41e940780286557541f0f67c8704139495"} Jan 22 10:35:51 crc kubenswrapper[4752]: I0122 10:35:51.496447 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fvt55" event={"ID":"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f","Type":"ContainerStarted","Data":"4a4de6083660c884d59b8a572ddd211bfd5a2f6cf381cb187356dcf5949bc61e"} Jan 22 10:35:51 crc kubenswrapper[4752]: I0122 10:35:51.497479 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" event={"ID":"b37990f5-f8af-4e35-a38b-7abd0fe076a6","Type":"ContainerStarted","Data":"e8a5556245a07747f8442fe7c56a6e845ddc9db72b1ce0c44def7e1625df6ddb"} Jan 22 10:35:51 crc kubenswrapper[4752]: I0122 10:35:51.498394 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-bck65" event={"ID":"a6bd47c7-e899-4f02-b199-b5b7e72f4734","Type":"ContainerStarted","Data":"a94ebf8935f1c1c0597897f6cc99f7729a880c989114102cdff767468999b05f"} Jan 22 10:35:51 crc kubenswrapper[4752]: I0122 10:35:51.499202 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" event={"ID":"929db940-dde0-4ea6-9f4a-ec1c2af6efc0","Type":"ContainerStarted","Data":"61376add4a034b3ea5c69bcc3b97ba84b70c7f6fd982f910681e510a451e31ef"} Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.546763 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" event={"ID":"9ac77310-4d33-4bd8-af5a-2a28644e950a","Type":"ContainerStarted","Data":"043f7f044353c258cea42800c7191febddd080c3ca3af3273668daf604485aeb"} Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.617623 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-tgkfd" podStartSLOduration=2.153931828 podStartE2EDuration="8.617589812s" 
podCreationTimestamp="2026-01-22 10:35:49 +0000 UTC" firstStartedPulling="2026-01-22 10:35:50.815764069 +0000 UTC m=+630.045706977" lastFinishedPulling="2026-01-22 10:35:57.279422053 +0000 UTC m=+636.509364961" observedRunningTime="2026-01-22 10:35:57.613483796 +0000 UTC m=+636.843426704" watchObservedRunningTime="2026-01-22 10:35:57.617589812 +0000 UTC m=+636.847532710" Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.723410 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.723504 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.723576 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.724441 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65d7aaf92a1adc89263932ce8b3a2116ed56843d8468de5cdaef20db861a025e"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:35:57 crc kubenswrapper[4752]: I0122 10:35:57.724511 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://65d7aaf92a1adc89263932ce8b3a2116ed56843d8468de5cdaef20db861a025e" gracePeriod=600 Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.573600 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" event={"ID":"b37990f5-f8af-4e35-a38b-7abd0fe076a6","Type":"ContainerStarted","Data":"bebb7dbf8aaa9bbfb1b1df3b67ee0f5f8b1f8e109b19ac404e87fa7565428e64"} Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.579522 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" event={"ID":"929db940-dde0-4ea6-9f4a-ec1c2af6efc0","Type":"ContainerStarted","Data":"bfb769d78d187a568b6052b6a38014344b9632f285c0aa26e25c36f9d0102651"} Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.585708 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="65d7aaf92a1adc89263932ce8b3a2116ed56843d8468de5cdaef20db861a025e" exitCode=0 Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.585819 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"65d7aaf92a1adc89263932ce8b3a2116ed56843d8468de5cdaef20db861a025e"} Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.585863 4752 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"042ed95c4b3ff840b51322a9e98a655ce91eb46ad1b15d6ef52fd539d78d7a7d"} Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.585882 4752 scope.go:117] "RemoveContainer" containerID="66d8fd85af8a62cbf6d844a6a3cd419c43895f7d9d194b9b69dabd0d0f78951a" Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.617299 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-577fbd5c75-hmwbt" podStartSLOduration=3.080284549 podStartE2EDuration="9.617274551s" podCreationTimestamp="2026-01-22 10:35:49 +0000 UTC" firstStartedPulling="2026-01-22 10:35:50.742447581 +0000 UTC m=+629.972390479" lastFinishedPulling="2026-01-22 10:35:57.279437573 +0000 UTC m=+636.509380481" observedRunningTime="2026-01-22 10:35:58.615778412 +0000 UTC m=+637.845721330" watchObservedRunningTime="2026-01-22 10:35:58.617274551 +0000 UTC m=+637.847217469" Jan 22 10:35:58 crc kubenswrapper[4752]: I0122 10:35:58.621368 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6g8xs" podStartSLOduration=2.899231555 podStartE2EDuration="9.621349866s" podCreationTimestamp="2026-01-22 10:35:49 +0000 UTC" firstStartedPulling="2026-01-22 10:35:50.538473997 +0000 UTC m=+629.768416905" lastFinishedPulling="2026-01-22 10:35:57.260592308 +0000 UTC m=+636.490535216" observedRunningTime="2026-01-22 10:35:58.593676263 +0000 UTC m=+637.823619181" watchObservedRunningTime="2026-01-22 10:35:58.621349866 +0000 UTC m=+637.851292804" Jan 22 10:36:00 crc kubenswrapper[4752]: I0122 10:36:00.599804 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-bck65" event={"ID":"a6bd47c7-e899-4f02-b199-b5b7e72f4734","Type":"ContainerStarted","Data":"f982406af28bbfb4abbf9cd78c88e19436f790d0251c603ef5c787a296df3b8d"} Jan 22 10:36:00 crc kubenswrapper[4752]: I0122 10:36:00.600438 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:36:00 crc kubenswrapper[4752]: I0122 10:36:00.619068 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-bck65" podStartSLOduration=2.005409361 podStartE2EDuration="10.619048731s" podCreationTimestamp="2026-01-22 10:35:50 +0000 UTC" firstStartedPulling="2026-01-22 10:35:51.217487026 +0000 UTC m=+630.447429944" lastFinishedPulling="2026-01-22 10:35:59.831126406 +0000 UTC m=+639.061069314" observedRunningTime="2026-01-22 10:36:00.614700409 +0000 UTC m=+639.844643317" watchObservedRunningTime="2026-01-22 10:36:00.619048731 +0000 UTC m=+639.848991639" Jan 22 10:36:04 crc kubenswrapper[4752]: I0122 10:36:04.627820 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fvt55" event={"ID":"8afa4c33-0f0a-44bc-a62e-d1161bcc7d1f","Type":"ContainerStarted","Data":"19efeeec384a9e21494ce804345c38c56179b80cf35ad6f6fe6ec51d1a2ab880"} Jan 22 10:36:04 crc kubenswrapper[4752]: I0122 10:36:04.628286 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:36:04 crc kubenswrapper[4752]: I0122 10:36:04.653520 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-59bdc8b94-fvt55" podStartSLOduration=1.6408470990000001 podStartE2EDuration="14.653499894s" podCreationTimestamp="2026-01-22 10:35:50 +0000 UTC" firstStartedPulling="2026-01-22 10:35:50.871010452 +0000 UTC m=+630.100953360" lastFinishedPulling="2026-01-22 10:36:03.883663237 +0000 UTC m=+643.113606155" observedRunningTime="2026-01-22 10:36:04.650478356 +0000 UTC m=+643.880421264" watchObservedRunningTime="2026-01-22 10:36:04.653499894 +0000 UTC m=+643.883442802" Jan 22 10:36:04 crc kubenswrapper[4752]: I0122 10:36:04.655265 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-fvt55" Jan 22 10:36:10 crc kubenswrapper[4752]: I0122 10:36:10.778760 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-bck65" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.684342 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt"] Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.686285 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.688370 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.705233 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt"] Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.875089 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb95\" (UniqueName: \"kubernetes.io/projected/be4ea031-db7a-49a8-a34b-46e2afef3c5e-kube-api-access-8zb95\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.875194 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.875230 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.976334 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.976406 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.976490 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb95\" (UniqueName: \"kubernetes.io/projected/be4ea031-db7a-49a8-a34b-46e2afef3c5e-kube-api-access-8zb95\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.977115 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.977187 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:28 crc kubenswrapper[4752]: I0122 10:36:28.996535 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb95\" (UniqueName: \"kubernetes.io/projected/be4ea031-db7a-49a8-a34b-46e2afef3c5e-kube-api-access-8zb95\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:29 crc kubenswrapper[4752]: I0122 10:36:29.005033 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:29 crc kubenswrapper[4752]: I0122 10:36:29.237898 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt"] Jan 22 10:36:29 crc kubenswrapper[4752]: I0122 10:36:29.790053 4752 generic.go:334] "Generic (PLEG): container finished" podID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerID="3b0190bf2978b9c0f9e36259d55c48a68c480f566a3baf9378c410621c6ef791" exitCode=0 Jan 22 10:36:29 crc kubenswrapper[4752]: I0122 10:36:29.790156 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" event={"ID":"be4ea031-db7a-49a8-a34b-46e2afef3c5e","Type":"ContainerDied","Data":"3b0190bf2978b9c0f9e36259d55c48a68c480f566a3baf9378c410621c6ef791"} Jan 22 10:36:29 crc kubenswrapper[4752]: I0122 10:36:29.790203 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" event={"ID":"be4ea031-db7a-49a8-a34b-46e2afef3c5e","Type":"ContainerStarted","Data":"5cba35d1c85249b492d5a9adf982de49675978ec2562cae30870e6d54825ad59"} Jan 22 10:36:40 crc kubenswrapper[4752]: I0122 10:36:40.874974 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" event={"ID":"be4ea031-db7a-49a8-a34b-46e2afef3c5e","Type":"ContainerStarted","Data":"d6aedab5b17da089da6c5a49d22b51d3c5cf34f08240a6b933ca1f9d7b22f5c0"} Jan 22 10:36:41 crc kubenswrapper[4752]: I0122 10:36:41.885905 4752 generic.go:334] "Generic (PLEG): container finished" podID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerID="d6aedab5b17da089da6c5a49d22b51d3c5cf34f08240a6b933ca1f9d7b22f5c0" exitCode=0 Jan 22 10:36:41 crc kubenswrapper[4752]: I0122 10:36:41.885954 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" event={"ID":"be4ea031-db7a-49a8-a34b-46e2afef3c5e","Type":"ContainerDied","Data":"d6aedab5b17da089da6c5a49d22b51d3c5cf34f08240a6b933ca1f9d7b22f5c0"} Jan 22 10:36:42 crc kubenswrapper[4752]: I0122 10:36:42.893246 4752 generic.go:334] "Generic (PLEG): container finished" podID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerID="8abc44d3ee87f127402ffa45971f7379fb7e6d29334675c9d703a100cdd34e2f" exitCode=0 Jan 22 10:36:42 crc kubenswrapper[4752]: I0122 10:36:42.893442 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" event={"ID":"be4ea031-db7a-49a8-a34b-46e2afef3c5e","Type":"ContainerDied","Data":"8abc44d3ee87f127402ffa45971f7379fb7e6d29334675c9d703a100cdd34e2f"} Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.209348 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.307957 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-util\") pod \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.308086 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-bundle\") pod \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.308129 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zb95\" (UniqueName: \"kubernetes.io/projected/be4ea031-db7a-49a8-a34b-46e2afef3c5e-kube-api-access-8zb95\") pod \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\" (UID: \"be4ea031-db7a-49a8-a34b-46e2afef3c5e\") " Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.308953 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-bundle" (OuterVolumeSpecName: "bundle") pod "be4ea031-db7a-49a8-a34b-46e2afef3c5e" (UID: "be4ea031-db7a-49a8-a34b-46e2afef3c5e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.315287 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4ea031-db7a-49a8-a34b-46e2afef3c5e-kube-api-access-8zb95" (OuterVolumeSpecName: "kube-api-access-8zb95") pod "be4ea031-db7a-49a8-a34b-46e2afef3c5e" (UID: "be4ea031-db7a-49a8-a34b-46e2afef3c5e"). InnerVolumeSpecName "kube-api-access-8zb95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.322216 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-util" (OuterVolumeSpecName: "util") pod "be4ea031-db7a-49a8-a34b-46e2afef3c5e" (UID: "be4ea031-db7a-49a8-a34b-46e2afef3c5e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.409985 4752 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.410039 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zb95\" (UniqueName: \"kubernetes.io/projected/be4ea031-db7a-49a8-a34b-46e2afef3c5e-kube-api-access-8zb95\") on node \"crc\" DevicePath \"\"" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.410059 4752 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be4ea031-db7a-49a8-a34b-46e2afef3c5e-util\") on node \"crc\" DevicePath \"\"" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.912167 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" event={"ID":"be4ea031-db7a-49a8-a34b-46e2afef3c5e","Type":"ContainerDied","Data":"5cba35d1c85249b492d5a9adf982de49675978ec2562cae30870e6d54825ad59"} Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.912227 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cba35d1c85249b492d5a9adf982de49675978ec2562cae30870e6d54825ad59" Jan 22 10:36:44 crc kubenswrapper[4752]: I0122 10:36:44.912287 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lfkgt" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.184232 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bv6v7"] Jan 22 10:36:50 crc kubenswrapper[4752]: E0122 10:36:50.184738 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="extract" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.184751 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="extract" Jan 22 10:36:50 crc kubenswrapper[4752]: E0122 10:36:50.184764 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="pull" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.184769 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="pull" Jan 22 10:36:50 crc kubenswrapper[4752]: E0122 10:36:50.184778 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="util" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.184784 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="util" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.184925 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4ea031-db7a-49a8-a34b-46e2afef3c5e" containerName="extract" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.185304 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.188325 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.188936 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-scrmb" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.189320 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.206470 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bv6v7"] Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.287447 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfp8j\" (UniqueName: \"kubernetes.io/projected/0b7bd235-53fc-4d21-86c0-cf9ace622cca-kube-api-access-cfp8j\") pod \"nmstate-operator-646758c888-bv6v7\" (UID: \"0b7bd235-53fc-4d21-86c0-cf9ace622cca\") " pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.388930 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfp8j\" (UniqueName: \"kubernetes.io/projected/0b7bd235-53fc-4d21-86c0-cf9ace622cca-kube-api-access-cfp8j\") pod \"nmstate-operator-646758c888-bv6v7\" (UID: \"0b7bd235-53fc-4d21-86c0-cf9ace622cca\") " pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.408966 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfp8j\" (UniqueName: \"kubernetes.io/projected/0b7bd235-53fc-4d21-86c0-cf9ace622cca-kube-api-access-cfp8j\") pod \"nmstate-operator-646758c888-bv6v7\" (UID: \"0b7bd235-53fc-4d21-86c0-cf9ace622cca\") " pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.504079 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.722095 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bv6v7"] Jan 22 10:36:50 crc kubenswrapper[4752]: I0122 10:36:50.953901 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" event={"ID":"0b7bd235-53fc-4d21-86c0-cf9ace622cca","Type":"ContainerStarted","Data":"42afe54ce5d19ddee14c68b53e0ff693435af6345842c3fad1cd13d5d96d8474"} Jan 22 10:36:52 crc kubenswrapper[4752]: I0122 10:36:52.968987 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" event={"ID":"0b7bd235-53fc-4d21-86c0-cf9ace622cca","Type":"ContainerStarted","Data":"f9889db579a646f8c19b7b18725fc1605d3d43a7b874a1ded9d8fca92d0fd06c"} Jan 22 10:36:52 crc kubenswrapper[4752]: I0122 10:36:52.993918 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bv6v7" podStartSLOduration=1.123277914 podStartE2EDuration="2.993895669s" podCreationTimestamp="2026-01-22 10:36:50 +0000 UTC" firstStartedPulling="2026-01-22 10:36:50.729923798 +0000 UTC m=+689.959866706" lastFinishedPulling="2026-01-22 10:36:52.600541553 +0000 UTC m=+691.830484461" observedRunningTime="2026-01-22 10:36:52.990261093 +0000 UTC m=+692.220204011" watchObservedRunningTime="2026-01-22 10:36:52.993895669 +0000 UTC m=+692.223838597" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.329722 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.331179 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.335922 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h94zs"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.337058 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.337827 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.339135 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-229ks" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.359947 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-s65hc"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.360897 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.367008 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h94zs"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.374537 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.438719 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-dbus-socket\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.438775 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6d8l\" (UniqueName: \"kubernetes.io/projected/2511f3bb-279b-4089-8786-73b70066fed9-kube-api-access-n6d8l\") pod \"nmstate-webhook-8474b5b9d8-bwfqg\" (UID: \"2511f3bb-279b-4089-8786-73b70066fed9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.438865 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9qh\" (UniqueName: \"kubernetes.io/projected/c3a36876-437a-44a1-b61c-ea81f242b231-kube-api-access-5z9qh\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.438887 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2511f3bb-279b-4089-8786-73b70066fed9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bwfqg\" (UID: \"2511f3bb-279b-4089-8786-73b70066fed9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.438947 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-nmstate-lock\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.438987 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-ovs-socket\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.439160 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp8d\" (UniqueName: \"kubernetes.io/projected/4d9abbf7-dc37-46b4-9436-292030be519d-kube-api-access-7bp8d\") pod \"nmstate-metrics-54757c584b-h94zs\" (UID: \"4d9abbf7-dc37-46b4-9436-292030be519d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.493870 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 
10:37:00.494932 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.498193 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.498358 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.498500 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tg4sb" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.509171 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541741 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-ovs-socket\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541804 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bp8d\" (UniqueName: \"kubernetes.io/projected/4d9abbf7-dc37-46b4-9436-292030be519d-kube-api-access-7bp8d\") pod \"nmstate-metrics-54757c584b-h94zs\" (UID: \"4d9abbf7-dc37-46b4-9436-292030be519d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541839 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5slzv\" (UniqueName: \"kubernetes.io/projected/39daa1b5-8210-402f-b888-34f438a5435e-kube-api-access-5slzv\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541879 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/39daa1b5-8210-402f-b888-34f438a5435e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541904 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/39daa1b5-8210-402f-b888-34f438a5435e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541933 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6d8l\" (UniqueName: \"kubernetes.io/projected/2511f3bb-279b-4089-8786-73b70066fed9-kube-api-access-n6d8l\") pod \"nmstate-webhook-8474b5b9d8-bwfqg\" (UID: \"2511f3bb-279b-4089-8786-73b70066fed9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541932 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" 
(UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-ovs-socket\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.541994 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-dbus-socket\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.542054 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z9qh\" (UniqueName: \"kubernetes.io/projected/c3a36876-437a-44a1-b61c-ea81f242b231-kube-api-access-5z9qh\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.542083 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2511f3bb-279b-4089-8786-73b70066fed9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bwfqg\" (UID: \"2511f3bb-279b-4089-8786-73b70066fed9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.542123 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-nmstate-lock\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.542262 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-dbus-socket\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.544046 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c3a36876-437a-44a1-b61c-ea81f242b231-nmstate-lock\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.555236 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2511f3bb-279b-4089-8786-73b70066fed9-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bwfqg\" (UID: \"2511f3bb-279b-4089-8786-73b70066fed9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.559762 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bp8d\" (UniqueName: \"kubernetes.io/projected/4d9abbf7-dc37-46b4-9436-292030be519d-kube-api-access-7bp8d\") pod \"nmstate-metrics-54757c584b-h94zs\" (UID: \"4d9abbf7-dc37-46b4-9436-292030be519d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.560873 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z9qh\" (UniqueName: 
\"kubernetes.io/projected/c3a36876-437a-44a1-b61c-ea81f242b231-kube-api-access-5z9qh\") pod \"nmstate-handler-s65hc\" (UID: \"c3a36876-437a-44a1-b61c-ea81f242b231\") " pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.561004 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6d8l\" (UniqueName: \"kubernetes.io/projected/2511f3bb-279b-4089-8786-73b70066fed9-kube-api-access-n6d8l\") pod \"nmstate-webhook-8474b5b9d8-bwfqg\" (UID: \"2511f3bb-279b-4089-8786-73b70066fed9\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.645412 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5slzv\" (UniqueName: \"kubernetes.io/projected/39daa1b5-8210-402f-b888-34f438a5435e-kube-api-access-5slzv\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.645820 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/39daa1b5-8210-402f-b888-34f438a5435e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.645869 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/39daa1b5-8210-402f-b888-34f438a5435e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.650216 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/39daa1b5-8210-402f-b888-34f438a5435e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.650719 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.661200 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.671043 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/39daa1b5-8210-402f-b888-34f438a5435e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.673487 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69d8647985-6mmnj"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.674287 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.679246 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5slzv\" (UniqueName: \"kubernetes.io/projected/39daa1b5-8210-402f-b888-34f438a5435e-kube-api-access-5slzv\") pod \"nmstate-console-plugin-7754f76f8b-lk5jg\" (UID: \"39daa1b5-8210-402f-b888-34f438a5435e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.689158 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69d8647985-6mmnj"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.711026 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.747717 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-oauth-serving-cert\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.747902 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ddc61973-5262-48ff-a2d8-2819c420e135-console-oauth-config\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.747941 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-console-config\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.748000 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j697\" (UniqueName: \"kubernetes.io/projected/ddc61973-5262-48ff-a2d8-2819c420e135-kube-api-access-2j697\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.748055 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-trusted-ca-bundle\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.748092 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc61973-5262-48ff-a2d8-2819c420e135-console-serving-cert\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.748157 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-service-ca\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.819509 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850087 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-oauth-serving-cert\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850145 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ddc61973-5262-48ff-a2d8-2819c420e135-console-oauth-config\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850179 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-console-config\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850209 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j697\" (UniqueName: \"kubernetes.io/projected/ddc61973-5262-48ff-a2d8-2819c420e135-kube-api-access-2j697\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850243 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-trusted-ca-bundle\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850284 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc61973-5262-48ff-a2d8-2819c420e135-console-serving-cert\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.850315 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-service-ca\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.851199 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-service-ca\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 
10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.852061 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-oauth-serving-cert\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.853138 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-console-config\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.856916 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddc61973-5262-48ff-a2d8-2819c420e135-trusted-ca-bundle\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.856936 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc61973-5262-48ff-a2d8-2819c420e135-console-serving-cert\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.857198 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ddc61973-5262-48ff-a2d8-2819c420e135-console-oauth-config\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.874241 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j697\" (UniqueName: \"kubernetes.io/projected/ddc61973-5262-48ff-a2d8-2819c420e135-kube-api-access-2j697\") pod \"console-69d8647985-6mmnj\" (UID: \"ddc61973-5262-48ff-a2d8-2819c420e135\") " pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.937142 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h94zs"] Jan 22 10:37:00 crc kubenswrapper[4752]: I0122 10:37:00.989654 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg"] Jan 22 10:37:00 crc kubenswrapper[4752]: W0122 10:37:00.998076 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2511f3bb_279b_4089_8786_73b70066fed9.slice/crio-7569cb04d966624d0c58541de930dece7c466b2f5ce2f831dfc6e8ac38e161da WatchSource:0}: Error finding container 7569cb04d966624d0c58541de930dece7c466b2f5ce2f831dfc6e8ac38e161da: Status 404 returned error can't find the container with id 7569cb04d966624d0c58541de930dece7c466b2f5ce2f831dfc6e8ac38e161da Jan 22 10:37:01 crc kubenswrapper[4752]: I0122 10:37:01.016914 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" event={"ID":"4d9abbf7-dc37-46b4-9436-292030be519d","Type":"ContainerStarted","Data":"f2ac4751b8f33a6b6b1df12944efcd670c37d2148c41efb397f40844d3444dbc"} Jan 22 
10:37:01 crc kubenswrapper[4752]: I0122 10:37:01.018130 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s65hc" event={"ID":"c3a36876-437a-44a1-b61c-ea81f242b231","Type":"ContainerStarted","Data":"836417809deda4c4e11f930ed043dc1b33905658ed2ae93a4d5e89a87487413e"} Jan 22 10:37:01 crc kubenswrapper[4752]: I0122 10:37:01.019225 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" event={"ID":"2511f3bb-279b-4089-8786-73b70066fed9","Type":"ContainerStarted","Data":"7569cb04d966624d0c58541de930dece7c466b2f5ce2f831dfc6e8ac38e161da"} Jan 22 10:37:01 crc kubenswrapper[4752]: I0122 10:37:01.045427 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg"] Jan 22 10:37:01 crc kubenswrapper[4752]: I0122 10:37:01.080206 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:01 crc kubenswrapper[4752]: I0122 10:37:01.496695 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69d8647985-6mmnj"] Jan 22 10:37:01 crc kubenswrapper[4752]: W0122 10:37:01.520660 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc61973_5262_48ff_a2d8_2819c420e135.slice/crio-f88f870935d4e1b93f1a9a1502749c84be3ad2feb982785fd9bd0f00bb45bf71 WatchSource:0}: Error finding container f88f870935d4e1b93f1a9a1502749c84be3ad2feb982785fd9bd0f00bb45bf71: Status 404 returned error can't find the container with id f88f870935d4e1b93f1a9a1502749c84be3ad2feb982785fd9bd0f00bb45bf71 Jan 22 10:37:02 crc kubenswrapper[4752]: I0122 10:37:02.028790 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" event={"ID":"39daa1b5-8210-402f-b888-34f438a5435e","Type":"ContainerStarted","Data":"34e0f5696999fd7eeb5b136f40e4c858a5641cb431d8a2551aeee6b790e6da59"} Jan 22 10:37:02 crc kubenswrapper[4752]: I0122 10:37:02.033457 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69d8647985-6mmnj" event={"ID":"ddc61973-5262-48ff-a2d8-2819c420e135","Type":"ContainerStarted","Data":"848f8143f07524b36d9a80c49c1a5a8d58bbcdec02c0e648fdff2e8e39830d6c"} Jan 22 10:37:02 crc kubenswrapper[4752]: I0122 10:37:02.033499 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69d8647985-6mmnj" event={"ID":"ddc61973-5262-48ff-a2d8-2819c420e135","Type":"ContainerStarted","Data":"f88f870935d4e1b93f1a9a1502749c84be3ad2feb982785fd9bd0f00bb45bf71"} Jan 22 10:37:04 crc kubenswrapper[4752]: I0122 10:37:04.052572 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" event={"ID":"2511f3bb-279b-4089-8786-73b70066fed9","Type":"ContainerStarted","Data":"80be08e84a31c0ea984dbdd1a4702688244d49355649cb430b78d3231215b27e"} Jan 22 10:37:04 crc kubenswrapper[4752]: I0122 10:37:04.055036 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:04 crc kubenswrapper[4752]: I0122 10:37:04.055229 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" event={"ID":"4d9abbf7-dc37-46b4-9436-292030be519d","Type":"ContainerStarted","Data":"31f25e339111e370204e3419d1094dbf3fa5402535576f8ab9ab4af476d8499d"} Jan 22 10:37:04 crc 
kubenswrapper[4752]: I0122 10:37:04.067563 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" event={"ID":"39daa1b5-8210-402f-b888-34f438a5435e","Type":"ContainerStarted","Data":"dbf7840e09e36cf5326a6d3a69d065342c870e89c9a06a78477d262ccbf082ff"} Jan 22 10:37:04 crc kubenswrapper[4752]: I0122 10:37:04.085740 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" podStartSLOduration=1.389772588 podStartE2EDuration="4.085719828s" podCreationTimestamp="2026-01-22 10:37:00 +0000 UTC" firstStartedPulling="2026-01-22 10:37:00.999540465 +0000 UTC m=+700.229483363" lastFinishedPulling="2026-01-22 10:37:03.695487655 +0000 UTC m=+702.925430603" observedRunningTime="2026-01-22 10:37:04.079773531 +0000 UTC m=+703.309716469" watchObservedRunningTime="2026-01-22 10:37:04.085719828 +0000 UTC m=+703.315662736" Jan 22 10:37:04 crc kubenswrapper[4752]: I0122 10:37:04.088648 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69d8647985-6mmnj" podStartSLOduration=4.088638326 podStartE2EDuration="4.088638326s" podCreationTimestamp="2026-01-22 10:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:37:02.07074436 +0000 UTC m=+701.300687268" watchObservedRunningTime="2026-01-22 10:37:04.088638326 +0000 UTC m=+703.318581234" Jan 22 10:37:04 crc kubenswrapper[4752]: I0122 10:37:04.098571 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lk5jg" podStartSLOduration=1.457294977 podStartE2EDuration="4.098551368s" podCreationTimestamp="2026-01-22 10:37:00 +0000 UTC" firstStartedPulling="2026-01-22 10:37:01.052519378 +0000 UTC m=+700.282462286" lastFinishedPulling="2026-01-22 10:37:03.693775759 +0000 UTC m=+702.923718677" observedRunningTime="2026-01-22 10:37:04.097782408 +0000 UTC m=+703.327725316" watchObservedRunningTime="2026-01-22 10:37:04.098551368 +0000 UTC m=+703.328494276" Jan 22 10:37:05 crc kubenswrapper[4752]: I0122 10:37:05.078454 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s65hc" event={"ID":"c3a36876-437a-44a1-b61c-ea81f242b231","Type":"ContainerStarted","Data":"fe99fe83e62a2ff0fca5352489fb7fceeedda71f9f36abb0b30cd827d46628c9"} Jan 22 10:37:05 crc kubenswrapper[4752]: I0122 10:37:05.078883 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:05 crc kubenswrapper[4752]: I0122 10:37:05.096840 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-s65hc" podStartSLOduration=2.150209396 podStartE2EDuration="5.096820473s" podCreationTimestamp="2026-01-22 10:37:00 +0000 UTC" firstStartedPulling="2026-01-22 10:37:00.760440794 +0000 UTC m=+699.990383702" lastFinishedPulling="2026-01-22 10:37:03.707051871 +0000 UTC m=+702.936994779" observedRunningTime="2026-01-22 10:37:05.096286029 +0000 UTC m=+704.326228937" watchObservedRunningTime="2026-01-22 10:37:05.096820473 +0000 UTC m=+704.326763381" Jan 22 10:37:06 crc kubenswrapper[4752]: I0122 10:37:06.313055 4752 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 10:37:07 crc kubenswrapper[4752]: I0122 10:37:07.095489 4752 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" event={"ID":"4d9abbf7-dc37-46b4-9436-292030be519d","Type":"ContainerStarted","Data":"c7893b2337bc372b83b8d53c5d763bc92d50b3a2776bb733212485ba327fa3ed"} Jan 22 10:37:07 crc kubenswrapper[4752]: I0122 10:37:07.118718 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-h94zs" podStartSLOduration=2.101942728 podStartE2EDuration="7.118692943s" podCreationTimestamp="2026-01-22 10:37:00 +0000 UTC" firstStartedPulling="2026-01-22 10:37:00.943872231 +0000 UTC m=+700.173815139" lastFinishedPulling="2026-01-22 10:37:05.960622446 +0000 UTC m=+705.190565354" observedRunningTime="2026-01-22 10:37:07.117526502 +0000 UTC m=+706.347469480" watchObservedRunningTime="2026-01-22 10:37:07.118692943 +0000 UTC m=+706.348635891" Jan 22 10:37:10 crc kubenswrapper[4752]: I0122 10:37:10.753639 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-s65hc" Jan 22 10:37:11 crc kubenswrapper[4752]: I0122 10:37:11.081094 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:11 crc kubenswrapper[4752]: I0122 10:37:11.082003 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:11 crc kubenswrapper[4752]: I0122 10:37:11.087033 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:11 crc kubenswrapper[4752]: I0122 10:37:11.143952 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69d8647985-6mmnj" Jan 22 10:37:11 crc kubenswrapper[4752]: I0122 10:37:11.209425 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jh6kp"] Jan 22 10:37:20 crc kubenswrapper[4752]: I0122 10:37:20.657279 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bwfqg" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.597278 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m"] Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.598781 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.601707 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.615442 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m"] Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.651990 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.652093 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.652127 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n25mh\" (UniqueName: \"kubernetes.io/projected/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-kube-api-access-n25mh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.753469 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.753842 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.753884 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n25mh\" (UniqueName: \"kubernetes.io/projected/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-kube-api-access-n25mh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.754131 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.754371 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.783672 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n25mh\" (UniqueName: \"kubernetes.io/projected/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-kube-api-access-n25mh\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:35 crc kubenswrapper[4752]: I0122 10:37:35.946798 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.157127 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m"] Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.253431 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jh6kp" podUID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" containerName="console" containerID="cri-o://43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31" gracePeriod=15 Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.327146 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" event={"ID":"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8","Type":"ContainerStarted","Data":"524fabfc89a044eb81931c0be3d4673a1f7f8d7af257348adf1152eb4ac0b754"} Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.327263 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" event={"ID":"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8","Type":"ContainerStarted","Data":"fa6f591e2fa49d2aeb94d7a705c5599cf575417a5b81fbb33d0e4b2eeece2604"} Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.570141 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jh6kp_b82cc492-857e-4eaf-8e18-87e830bdc9f6/console/0.log" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.570443 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665051 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665142 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665162 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665178 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665197 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665284 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.665323 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwf9\" (UniqueName: \"kubernetes.io/projected/b82cc492-857e-4eaf-8e18-87e830bdc9f6-kube-api-access-sfwf9\") pod \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\" (UID: \"b82cc492-857e-4eaf-8e18-87e830bdc9f6\") " Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.666472 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.666486 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.666767 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca" (OuterVolumeSpecName: "service-ca") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.667504 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config" (OuterVolumeSpecName: "console-config") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.671236 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.671266 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82cc492-857e-4eaf-8e18-87e830bdc9f6-kube-api-access-sfwf9" (OuterVolumeSpecName: "kube-api-access-sfwf9") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "kube-api-access-sfwf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.671903 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b82cc492-857e-4eaf-8e18-87e830bdc9f6" (UID: "b82cc492-857e-4eaf-8e18-87e830bdc9f6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.766819 4752 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.766918 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfwf9\" (UniqueName: \"kubernetes.io/projected/b82cc492-857e-4eaf-8e18-87e830bdc9f6-kube-api-access-sfwf9\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.766949 4752 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.766973 4752 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.766998 4752 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.767023 4752 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:36 crc kubenswrapper[4752]: I0122 10:37:36.767047 4752 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b82cc492-857e-4eaf-8e18-87e830bdc9f6-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.336366 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" event={"ID":"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8","Type":"ContainerDied","Data":"524fabfc89a044eb81931c0be3d4673a1f7f8d7af257348adf1152eb4ac0b754"} Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.336297 4752 generic.go:334] "Generic (PLEG): container finished" podID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerID="524fabfc89a044eb81931c0be3d4673a1f7f8d7af257348adf1152eb4ac0b754" exitCode=0 Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.340226 4752 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jh6kp_b82cc492-857e-4eaf-8e18-87e830bdc9f6/console/0.log" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.340297 4752 generic.go:334] "Generic (PLEG): container finished" podID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" containerID="43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31" exitCode=2 Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.340389 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jh6kp" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.340381 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jh6kp" event={"ID":"b82cc492-857e-4eaf-8e18-87e830bdc9f6","Type":"ContainerDied","Data":"43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31"} Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.340486 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jh6kp" event={"ID":"b82cc492-857e-4eaf-8e18-87e830bdc9f6","Type":"ContainerDied","Data":"05de6e20d504002b041892e8bf6c3732230d7b77fe46f3def808ed43dda88b69"} Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.340522 4752 scope.go:117] "RemoveContainer" containerID="43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.384735 4752 scope.go:117] "RemoveContainer" containerID="43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.384985 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jh6kp"] Jan 22 10:37:37 crc kubenswrapper[4752]: E0122 10:37:37.385964 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31\": container with ID starting with 43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31 not found: ID does not exist" containerID="43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.385996 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31"} err="failed to get container status \"43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31\": rpc error: code = NotFound desc = could not find container \"43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31\": container with ID starting with 43d45d54eaf1451800296d1859794b3a5a84fb2b4bf6984616e2abbed01b0a31 not found: ID does not exist" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.388589 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jh6kp"] Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.952781 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tblcw"] Jan 22 10:37:37 crc kubenswrapper[4752]: E0122 10:37:37.953286 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" containerName="console" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.953316 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" containerName="console" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.953580 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" containerName="console" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.955370 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.957658 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tblcw"] Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.984666 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58f5h\" (UniqueName: \"kubernetes.io/projected/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-kube-api-access-58f5h\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.984756 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-catalog-content\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:37 crc kubenswrapper[4752]: I0122 10:37:37.985012 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-utilities\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.086754 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-catalog-content\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.087479 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-catalog-content\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.087496 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-utilities\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.087794 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58f5h\" (UniqueName: \"kubernetes.io/projected/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-kube-api-access-58f5h\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.087832 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-utilities\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.107593 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-58f5h\" (UniqueName: \"kubernetes.io/projected/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-kube-api-access-58f5h\") pod \"redhat-operators-tblcw\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.301835 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:38 crc kubenswrapper[4752]: I0122 10:37:38.519678 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tblcw"] Jan 22 10:37:39 crc kubenswrapper[4752]: I0122 10:37:39.114586 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82cc492-857e-4eaf-8e18-87e830bdc9f6" path="/var/lib/kubelet/pods/b82cc492-857e-4eaf-8e18-87e830bdc9f6/volumes" Jan 22 10:37:39 crc kubenswrapper[4752]: I0122 10:37:39.354772 4752 generic.go:334] "Generic (PLEG): container finished" podID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerID="56a01eaefc4a031132701a611f3b5b03a3d362d3fa02f2343c2cc30d87a9436f" exitCode=0 Jan 22 10:37:39 crc kubenswrapper[4752]: I0122 10:37:39.354847 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" event={"ID":"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8","Type":"ContainerDied","Data":"56a01eaefc4a031132701a611f3b5b03a3d362d3fa02f2343c2cc30d87a9436f"} Jan 22 10:37:39 crc kubenswrapper[4752]: I0122 10:37:39.356296 4752 generic.go:334] "Generic (PLEG): container finished" podID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerID="bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9" exitCode=0 Jan 22 10:37:39 crc kubenswrapper[4752]: I0122 10:37:39.356328 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tblcw" event={"ID":"b7a2a36c-e2a1-41ed-94ed-4ef39755530c","Type":"ContainerDied","Data":"bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9"} Jan 22 10:37:39 crc kubenswrapper[4752]: I0122 10:37:39.356344 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tblcw" event={"ID":"b7a2a36c-e2a1-41ed-94ed-4ef39755530c","Type":"ContainerStarted","Data":"ea827fcc53a2bf969d6762fc733b5eb8bba6f501f86ae10c6360832fc702d1cd"} Jan 22 10:37:40 crc kubenswrapper[4752]: I0122 10:37:40.366003 4752 generic.go:334] "Generic (PLEG): container finished" podID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerID="cd00f0a7b77ad0fc50dc59bf6229ed4c164ecee1f40079171b0c169357509a5a" exitCode=0 Jan 22 10:37:40 crc kubenswrapper[4752]: I0122 10:37:40.366093 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" event={"ID":"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8","Type":"ContainerDied","Data":"cd00f0a7b77ad0fc50dc59bf6229ed4c164ecee1f40079171b0c169357509a5a"} Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.378155 4752 generic.go:334] "Generic (PLEG): container finished" podID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerID="377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34" exitCode=0 Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.378333 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tblcw" 
event={"ID":"b7a2a36c-e2a1-41ed-94ed-4ef39755530c","Type":"ContainerDied","Data":"377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34"} Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.649189 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.743990 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n25mh\" (UniqueName: \"kubernetes.io/projected/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-kube-api-access-n25mh\") pod \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.744114 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-util\") pod \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.744141 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-bundle\") pod \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\" (UID: \"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8\") " Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.751888 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-bundle" (OuterVolumeSpecName: "bundle") pod "dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" (UID: "dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.761047 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-kube-api-access-n25mh" (OuterVolumeSpecName: "kube-api-access-n25mh") pod "dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" (UID: "dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8"). InnerVolumeSpecName "kube-api-access-n25mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.845585 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n25mh\" (UniqueName: \"kubernetes.io/projected/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-kube-api-access-n25mh\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:41 crc kubenswrapper[4752]: I0122 10:37:41.845620 4752 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:42 crc kubenswrapper[4752]: I0122 10:37:42.257223 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-util" (OuterVolumeSpecName: "util") pod "dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" (UID: "dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:42 crc kubenswrapper[4752]: I0122 10:37:42.352293 4752 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8-util\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:42 crc kubenswrapper[4752]: I0122 10:37:42.387953 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" event={"ID":"dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8","Type":"ContainerDied","Data":"fa6f591e2fa49d2aeb94d7a705c5599cf575417a5b81fbb33d0e4b2eeece2604"} Jan 22 10:37:42 crc kubenswrapper[4752]: I0122 10:37:42.388016 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6f591e2fa49d2aeb94d7a705c5599cf575417a5b81fbb33d0e4b2eeece2604" Jan 22 10:37:42 crc kubenswrapper[4752]: I0122 10:37:42.388046 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6tb7m" Jan 22 10:37:43 crc kubenswrapper[4752]: I0122 10:37:43.396467 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tblcw" event={"ID":"b7a2a36c-e2a1-41ed-94ed-4ef39755530c","Type":"ContainerStarted","Data":"e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5"} Jan 22 10:37:43 crc kubenswrapper[4752]: I0122 10:37:43.414268 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tblcw" podStartSLOduration=3.523834244 podStartE2EDuration="6.41424951s" podCreationTimestamp="2026-01-22 10:37:37 +0000 UTC" firstStartedPulling="2026-01-22 10:37:39.357346453 +0000 UTC m=+738.587289361" lastFinishedPulling="2026-01-22 10:37:42.247761719 +0000 UTC m=+741.477704627" observedRunningTime="2026-01-22 10:37:43.412959025 +0000 UTC m=+742.642901933" watchObservedRunningTime="2026-01-22 10:37:43.41424951 +0000 UTC m=+742.644192418" Jan 22 10:37:48 crc kubenswrapper[4752]: I0122 10:37:48.302291 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:48 crc kubenswrapper[4752]: I0122 10:37:48.302729 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:48 crc kubenswrapper[4752]: I0122 10:37:48.344308 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:48 crc kubenswrapper[4752]: I0122 10:37:48.472671 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:50 crc kubenswrapper[4752]: I0122 10:37:50.760673 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tblcw"] Jan 22 10:37:50 crc kubenswrapper[4752]: I0122 10:37:50.761430 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tblcw" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="registry-server" containerID="cri-o://e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5" gracePeriod=2 Jan 22 10:37:50 crc kubenswrapper[4752]: E0122 10:37:50.906065 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7a2a36c_e2a1_41ed_94ed_4ef39755530c.slice/crio-e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.166614 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.276196 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-catalog-content\") pod \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.276306 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58f5h\" (UniqueName: \"kubernetes.io/projected/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-kube-api-access-58f5h\") pod \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.276350 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-utilities\") pod \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\" (UID: \"b7a2a36c-e2a1-41ed-94ed-4ef39755530c\") " Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.277354 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-utilities" (OuterVolumeSpecName: "utilities") pod "b7a2a36c-e2a1-41ed-94ed-4ef39755530c" (UID: "b7a2a36c-e2a1-41ed-94ed-4ef39755530c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.295115 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-kube-api-access-58f5h" (OuterVolumeSpecName: "kube-api-access-58f5h") pod "b7a2a36c-e2a1-41ed-94ed-4ef39755530c" (UID: "b7a2a36c-e2a1-41ed-94ed-4ef39755530c"). InnerVolumeSpecName "kube-api-access-58f5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.378140 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.378167 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58f5h\" (UniqueName: \"kubernetes.io/projected/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-kube-api-access-58f5h\") on node \"crc\" DevicePath \"\"" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.447796 4752 generic.go:334] "Generic (PLEG): container finished" podID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerID="e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5" exitCode=0 Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.447842 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tblcw" event={"ID":"b7a2a36c-e2a1-41ed-94ed-4ef39755530c","Type":"ContainerDied","Data":"e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5"} Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.447893 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tblcw" event={"ID":"b7a2a36c-e2a1-41ed-94ed-4ef39755530c","Type":"ContainerDied","Data":"ea827fcc53a2bf969d6762fc733b5eb8bba6f501f86ae10c6360832fc702d1cd"} Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.447913 4752 scope.go:117] "RemoveContainer" containerID="e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.448044 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tblcw" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.464028 4752 scope.go:117] "RemoveContainer" containerID="377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.477629 4752 scope.go:117] "RemoveContainer" containerID="bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.494397 4752 scope.go:117] "RemoveContainer" containerID="e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5" Jan 22 10:37:51 crc kubenswrapper[4752]: E0122 10:37:51.495614 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5\": container with ID starting with e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5 not found: ID does not exist" containerID="e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.495660 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5"} err="failed to get container status \"e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5\": rpc error: code = NotFound desc = could not find container \"e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5\": container with ID starting with e658a5623320303155604d8bfb557aaad97a35df0505478a9ea045574a5fc6a5 not found: ID does not exist" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.495684 4752 scope.go:117] "RemoveContainer" containerID="377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34" Jan 22 10:37:51 crc kubenswrapper[4752]: E0122 10:37:51.496135 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34\": container with ID starting with 377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34 not found: ID does not exist" containerID="377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.496166 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34"} err="failed to get container status \"377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34\": rpc error: code = NotFound desc = could not find container \"377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34\": container with ID starting with 377c3b1c9e5c28f8cd1cd7f9098afc51e87e89606f877a90396597dd071e4b34 not found: ID does not exist" Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.496187 4752 scope.go:117] "RemoveContainer" containerID="bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9" Jan 22 10:37:51 crc kubenswrapper[4752]: E0122 10:37:51.496501 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9\": container with ID starting with bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9 not found: ID does not exist" containerID="bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9" 
Jan 22 10:37:51 crc kubenswrapper[4752]: I0122 10:37:51.496524 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9"} err="failed to get container status \"bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9\": rpc error: code = NotFound desc = could not find container \"bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9\": container with ID starting with bea5b7058129935b884f7a3ebc4282eda2859d41ab83afe68e9e9e06693977a9 not found: ID does not exist"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.191772 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"]
Jan 22 10:37:52 crc kubenswrapper[4752]: E0122 10:37:52.192237 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="pull"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192248 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="pull"
Jan 22 10:37:52 crc kubenswrapper[4752]: E0122 10:37:52.192258 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="extract-content"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192264 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="extract-content"
Jan 22 10:37:52 crc kubenswrapper[4752]: E0122 10:37:52.192273 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="extract"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192279 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="extract"
Jan 22 10:37:52 crc kubenswrapper[4752]: E0122 10:37:52.192292 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="util"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192298 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="util"
Jan 22 10:37:52 crc kubenswrapper[4752]: E0122 10:37:52.192305 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="registry-server"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192310 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="registry-server"
Jan 22 10:37:52 crc kubenswrapper[4752]: E0122 10:37:52.192324 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="extract-utilities"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192330 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="extract-utilities"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192426 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" containerName="registry-server"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192435 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb7940e-6f07-4f5d-a08f-c88b4f5e98a8" containerName="extract"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.192829 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.198127 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fkhg4"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.198153 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.198262 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.198581 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.199554 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.225072 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"]
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.289821 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmr4\" (UniqueName: \"kubernetes.io/projected/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-kube-api-access-7lmr4\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.289890 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-apiservice-cert\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.289928 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-webhook-cert\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.391570 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmr4\" (UniqueName: \"kubernetes.io/projected/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-kube-api-access-7lmr4\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.391658 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-apiservice-cert\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.391715 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-webhook-cert\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.397393 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-webhook-cert\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.410453 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmr4\" (UniqueName: \"kubernetes.io/projected/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-kube-api-access-7lmr4\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.410823 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f-apiservice-cert\") pod \"metallb-operator-controller-manager-84555878f8-dt2nr\" (UID: \"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f\") " pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.462716 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"]
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.463844 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.476695 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"]
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.480351 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.480399 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.480552 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tltxg"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.514203 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.548994 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7a2a36c-e2a1-41ed-94ed-4ef39755530c" (UID: "b7a2a36c-e2a1-41ed-94ed-4ef39755530c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.595170 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6z9\" (UniqueName: \"kubernetes.io/projected/623835d1-e922-4a57-ab28-96c633496ff0-kube-api-access-zr6z9\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.595258 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/623835d1-e922-4a57-ab28-96c633496ff0-webhook-cert\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.595345 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/623835d1-e922-4a57-ab28-96c633496ff0-apiservice-cert\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.595398 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7a2a36c-e2a1-41ed-94ed-4ef39755530c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.683478 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tblcw"]
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.688583 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tblcw"]
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.697034 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/623835d1-e922-4a57-ab28-96c633496ff0-apiservice-cert\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.697086 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr6z9\" (UniqueName: \"kubernetes.io/projected/623835d1-e922-4a57-ab28-96c633496ff0-kube-api-access-zr6z9\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.697135 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/623835d1-e922-4a57-ab28-96c633496ff0-webhook-cert\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.702367 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/623835d1-e922-4a57-ab28-96c633496ff0-webhook-cert\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.705103 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/623835d1-e922-4a57-ab28-96c633496ff0-apiservice-cert\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.718158 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr6z9\" (UniqueName: \"kubernetes.io/projected/623835d1-e922-4a57-ab28-96c633496ff0-kube-api-access-zr6z9\") pod \"metallb-operator-webhook-server-7f586c5cc5-9vkb6\" (UID: \"623835d1-e922-4a57-ab28-96c633496ff0\") " pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.767736 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"]
Jan 22 10:37:52 crc kubenswrapper[4752]: W0122 10:37:52.772866 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a813c6_6bb2_4b5f_b67c_ed2cc30ced0f.slice/crio-32af3855d6c65845d09a62f3f6f56c1a1b11277e5dfa03637cbef329752bd841 WatchSource:0}: Error finding container 32af3855d6c65845d09a62f3f6f56c1a1b11277e5dfa03637cbef329752bd841: Status 404 returned error can't find the container with id 32af3855d6c65845d09a62f3f6f56c1a1b11277e5dfa03637cbef329752bd841
Jan 22 10:37:52 crc kubenswrapper[4752]: I0122 10:37:52.779082 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:37:53 crc kubenswrapper[4752]: I0122 10:37:53.105199 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7a2a36c-e2a1-41ed-94ed-4ef39755530c" path="/var/lib/kubelet/pods/b7a2a36c-e2a1-41ed-94ed-4ef39755530c/volumes"
Jan 22 10:37:53 crc kubenswrapper[4752]: I0122 10:37:53.279443 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"]
Jan 22 10:37:53 crc kubenswrapper[4752]: I0122 10:37:53.463111 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6" event={"ID":"623835d1-e922-4a57-ab28-96c633496ff0","Type":"ContainerStarted","Data":"47d9b1a4db07e0f07f6cd4161a50cc6d32008382d35d01abbab1d124e8695241"}
Jan 22 10:37:53 crc kubenswrapper[4752]: I0122 10:37:53.464275 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr" event={"ID":"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f","Type":"ContainerStarted","Data":"32af3855d6c65845d09a62f3f6f56c1a1b11277e5dfa03637cbef329752bd841"}
Jan 22 10:38:00 crc kubenswrapper[4752]: I0122 10:38:00.507454 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr" event={"ID":"26a813c6-6bb2-4b5f-b67c-ed2cc30ced0f","Type":"ContainerStarted","Data":"883a6392c0c288f57dca7c8112910a4d4cb8f60d98886ec37abf5b24ad5b585e"}
Jan 22 10:38:00 crc kubenswrapper[4752]: I0122 10:38:00.508143 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:38:00 crc kubenswrapper[4752]: I0122 10:38:00.510518 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6" event={"ID":"623835d1-e922-4a57-ab28-96c633496ff0","Type":"ContainerStarted","Data":"308d23872d20bbf33905285a42cc9a49f675de314795dae41802631aeee81b14"}
Jan 22 10:38:00 crc kubenswrapper[4752]: I0122 10:38:00.510717 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:38:00 crc kubenswrapper[4752]: I0122 10:38:00.534948 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr" podStartSLOduration=1.608362713 podStartE2EDuration="8.534933205s" podCreationTimestamp="2026-01-22 10:37:52 +0000 UTC" firstStartedPulling="2026-01-22 10:37:52.774925405 +0000 UTC m=+752.004868323" lastFinishedPulling="2026-01-22 10:37:59.701495907 +0000 UTC m=+758.931438815" observedRunningTime="2026-01-22 10:38:00.528247964 +0000 UTC m=+759.758190872" watchObservedRunningTime="2026-01-22 10:38:00.534933205 +0000 UTC m=+759.764876113"
Jan 22 10:38:00 crc kubenswrapper[4752]: I0122 10:38:00.554131 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6" podStartSLOduration=2.123079813 podStartE2EDuration="8.554113493s" podCreationTimestamp="2026-01-22 10:37:52 +0000 UTC" firstStartedPulling="2026-01-22 10:37:53.292636186 +0000 UTC m=+752.522579094" lastFinishedPulling="2026-01-22 10:37:59.723669866 +0000 UTC m=+758.953612774" observedRunningTime="2026-01-22 10:38:00.552360085 +0000 UTC m=+759.782303003" watchObservedRunningTime="2026-01-22 10:38:00.554113493 +0000 UTC m=+759.784056401"
Jan 22 10:38:12 crc kubenswrapper[4752]: I0122 10:38:12.786557 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f586c5cc5-9vkb6"
Jan 22 10:38:27 crc kubenswrapper[4752]: I0122 10:38:27.723901 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:38:27 crc kubenswrapper[4752]: I0122 10:38:27.725624 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:38:32 crc kubenswrapper[4752]: I0122 10:38:32.518904 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-84555878f8-dt2nr"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.370894 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d8nd5"]
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.373656 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: W0122 10:38:33.375802 4752 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-5s7d6": failed to list *v1.Secret: secrets "frr-k8s-daemon-dockercfg-5s7d6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.375843 4752 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-5s7d6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"frr-k8s-daemon-dockercfg-5s7d6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 22 10:38:33 crc kubenswrapper[4752]: W0122 10:38:33.375982 4752 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: configmaps "frr-startup" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.376031 4752 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"frr-startup\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 22 10:38:33 crc kubenswrapper[4752]: W0122 10:38:33.378239 4752 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: secrets "frr-k8s-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.378271 4752 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"frr-k8s-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.380426 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"]
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.381283 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.382761 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.405546 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"]
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436095 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/08f8add3-b808-4eb1-a512-d955d30091ee-frr-startup\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436145 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af83e0d3-5874-46e3-99be-f19e070e8ef9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436201 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-reloader\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436227 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbz8m\" (UniqueName: \"kubernetes.io/projected/08f8add3-b808-4eb1-a512-d955d30091ee-kube-api-access-lbz8m\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436292 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f8add3-b808-4eb1-a512-d955d30091ee-metrics-certs\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436329 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-frr-sockets\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436352 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-metrics\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436383 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-frr-conf\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.436423 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblpr\" (UniqueName: \"kubernetes.io/projected/af83e0d3-5874-46e3-99be-f19e070e8ef9-kube-api-access-rblpr\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.460145 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qmqmm"]
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.461075 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.463061 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.463121 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-967q9"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.463214 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.463244 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.467967 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-n5x89"]
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.469112 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.473793 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.489065 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n5x89"]
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537643 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62c067e3-37a6-4b48-bb7c-d94b46740ba9-metrics-certs\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537692 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-reloader\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537714 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbz8m\" (UniqueName: \"kubernetes.io/projected/08f8add3-b808-4eb1-a512-d955d30091ee-kube-api-access-lbz8m\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537744 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdztc\" (UniqueName: \"kubernetes.io/projected/55f3dbda-02f8-4612-8c4a-94e97e23f42a-kube-api-access-vdztc\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537764 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-metrics-certs\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537781 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f8add3-b808-4eb1-a512-d955d30091ee-metrics-certs\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537800 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-frr-sockets\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537820 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-metrics\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537894 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-frr-conf\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537964 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblpr\" (UniqueName: \"kubernetes.io/projected/af83e0d3-5874-46e3-99be-f19e070e8ef9-kube-api-access-rblpr\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.537991 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538018 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c067e3-37a6-4b48-bb7c-d94b46740ba9-cert\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538148 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/08f8add3-b808-4eb1-a512-d955d30091ee-frr-startup\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538190 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af83e0d3-5874-46e3-99be-f19e070e8ef9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538224 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/55f3dbda-02f8-4612-8c4a-94e97e23f42a-metallb-excludel2\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538267 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-metrics\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.538318 4752 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538325 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78z87\" (UniqueName: \"kubernetes.io/projected/62c067e3-37a6-4b48-bb7c-d94b46740ba9-kube-api-access-78z87\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538391 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-frr-sockets\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538417 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-frr-conf\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.538407 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af83e0d3-5874-46e3-99be-f19e070e8ef9-cert podName:af83e0d3-5874-46e3-99be-f19e070e8ef9 nodeName:}" failed. No retries permitted until 2026-01-22 10:38:34.038377363 +0000 UTC m=+793.268320351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af83e0d3-5874-46e3-99be-f19e070e8ef9-cert") pod "frr-k8s-webhook-server-7df86c4f6c-nrxvf" (UID: "af83e0d3-5874-46e3-99be-f19e070e8ef9") : secret "frr-k8s-webhook-server-cert" not found
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.538718 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/08f8add3-b808-4eb1-a512-d955d30091ee-reloader\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.556164 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rblpr\" (UniqueName: \"kubernetes.io/projected/af83e0d3-5874-46e3-99be-f19e070e8ef9-kube-api-access-rblpr\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.558365 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbz8m\" (UniqueName: \"kubernetes.io/projected/08f8add3-b808-4eb1-a512-d955d30091ee-kube-api-access-lbz8m\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.639935 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.639987 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c067e3-37a6-4b48-bb7c-d94b46740ba9-cert\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.640058 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/55f3dbda-02f8-4612-8c4a-94e97e23f42a-metallb-excludel2\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.640106 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78z87\" (UniqueName: \"kubernetes.io/projected/62c067e3-37a6-4b48-bb7c-d94b46740ba9-kube-api-access-78z87\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.640108 4752 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.640135 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62c067e3-37a6-4b48-bb7c-d94b46740ba9-metrics-certs\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.640172 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdztc\" (UniqueName: \"kubernetes.io/projected/55f3dbda-02f8-4612-8c4a-94e97e23f42a-kube-api-access-vdztc\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: E0122 10:38:33.640186 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist podName:55f3dbda-02f8-4612-8c4a-94e97e23f42a nodeName:}" failed. No retries permitted until 2026-01-22 10:38:34.140167612 +0000 UTC m=+793.370110520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist") pod "speaker-qmqmm" (UID: "55f3dbda-02f8-4612-8c4a-94e97e23f42a") : secret "metallb-memberlist" not found
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.640213 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-metrics-certs\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.640796 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/55f3dbda-02f8-4612-8c4a-94e97e23f42a-metallb-excludel2\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.642494 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.644175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62c067e3-37a6-4b48-bb7c-d94b46740ba9-metrics-certs\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.644357 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-metrics-certs\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.652985 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62c067e3-37a6-4b48-bb7c-d94b46740ba9-cert\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.658043 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78z87\" (UniqueName: \"kubernetes.io/projected/62c067e3-37a6-4b48-bb7c-d94b46740ba9-kube-api-access-78z87\") pod \"controller-6968d8fdc4-n5x89\" (UID: \"62c067e3-37a6-4b48-bb7c-d94b46740ba9\") " pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.660022 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdztc\" (UniqueName: \"kubernetes.io/projected/55f3dbda-02f8-4612-8c4a-94e97e23f42a-kube-api-access-vdztc\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:33 crc kubenswrapper[4752]: I0122 10:38:33.789205 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.046255 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af83e0d3-5874-46e3-99be-f19e070e8ef9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.052757 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af83e0d3-5874-46e3-99be-f19e070e8ef9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-nrxvf\" (UID: \"af83e0d3-5874-46e3-99be-f19e070e8ef9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.149611 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm"
Jan 22 10:38:34 crc kubenswrapper[4752]: E0122 10:38:34.149833 4752 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 22 10:38:34 crc kubenswrapper[4752]: E0122 10:38:34.149949 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist podName:55f3dbda-02f8-4612-8c4a-94e97e23f42a nodeName:}" failed. No retries permitted until 2026-01-22 10:38:35.149925648 +0000 UTC m=+794.379868576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist") pod "speaker-qmqmm" (UID: "55f3dbda-02f8-4612-8c4a-94e97e23f42a") : secret "metallb-memberlist" not found
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.188867 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-n5x89"]
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.325680 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.329395 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/08f8add3-b808-4eb1-a512-d955d30091ee-frr-startup\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:34 crc kubenswrapper[4752]: E0122 10:38:34.538261 4752 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: failed to sync secret cache: timed out waiting for the condition
Jan 22 10:38:34 crc kubenswrapper[4752]: E0122 10:38:34.538412 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f8add3-b808-4eb1-a512-d955d30091ee-metrics-certs podName:08f8add3-b808-4eb1-a512-d955d30091ee nodeName:}" failed. No retries permitted until 2026-01-22 10:38:35.038378388 +0000 UTC m=+794.268321326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08f8add3-b808-4eb1-a512-d955d30091ee-metrics-certs") pod "frr-k8s-d8nd5" (UID: "08f8add3-b808-4eb1-a512-d955d30091ee") : failed to sync secret cache: timed out waiting for the condition
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.724726 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5s7d6"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.726446 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.747177 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n5x89" event={"ID":"62c067e3-37a6-4b48-bb7c-d94b46740ba9","Type":"ContainerStarted","Data":"97330e9866503fbe031b4f153ba1947c7a842ed4eaa397d756eeecd6aab8b9a7"}
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.747263 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n5x89" event={"ID":"62c067e3-37a6-4b48-bb7c-d94b46740ba9","Type":"ContainerStarted","Data":"17bc5dc300d4a360813dab21a5869e0a9cd51de9d8d60dfcb6e6681b50d16303"}
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.747298 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-n5x89" event={"ID":"62c067e3-37a6-4b48-bb7c-d94b46740ba9","Type":"ContainerStarted","Data":"8cdac9d5fd98159eba660c4d51f66d5df0b11a164164c8c9e0cd7552f12a372b"}
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.747365 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-n5x89"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.783021 4752 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.791437 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-n5x89" podStartSLOduration=1.791411351 podStartE2EDuration="1.791411351s" podCreationTimestamp="2026-01-22 10:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:34.782781058 +0000 UTC m=+794.012724016" watchObservedRunningTime="2026-01-22 10:38:34.791411351 +0000 UTC m=+794.021354269"
Jan 22 10:38:34 crc kubenswrapper[4752]: I0122 10:38:34.980174 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf"]
Jan 22 10:38:34 crc kubenswrapper[4752]: W0122 10:38:34.993096 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf83e0d3_5874_46e3_99be_f19e070e8ef9.slice/crio-15ceefb92cbc429125ed92f63791bce9f480cb1a5fd238dcbc80b8014bb49f9e WatchSource:0}: Error finding container 15ceefb92cbc429125ed92f63791bce9f480cb1a5fd238dcbc80b8014bb49f9e: Status 404 returned error can't find the container with id 15ceefb92cbc429125ed92f63791bce9f480cb1a5fd238dcbc80b8014bb49f9e
Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.064773 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f8add3-b808-4eb1-a512-d955d30091ee-metrics-certs\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.071294 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08f8add3-b808-4eb1-a512-d955d30091ee-metrics-certs\") pod \"frr-k8s-d8nd5\" (UID: \"08f8add3-b808-4eb1-a512-d955d30091ee\") " pod="metallb-system/frr-k8s-d8nd5"
Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.165832 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"memberlist\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm" Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.170388 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55f3dbda-02f8-4612-8c4a-94e97e23f42a-memberlist\") pod \"speaker-qmqmm\" (UID: \"55f3dbda-02f8-4612-8c4a-94e97e23f42a\") " pod="metallb-system/speaker-qmqmm" Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.197576 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d8nd5" Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.280062 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qmqmm" Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.756598 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf" event={"ID":"af83e0d3-5874-46e3-99be-f19e070e8ef9","Type":"ContainerStarted","Data":"15ceefb92cbc429125ed92f63791bce9f480cb1a5fd238dcbc80b8014bb49f9e"} Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.759890 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qmqmm" event={"ID":"55f3dbda-02f8-4612-8c4a-94e97e23f42a","Type":"ContainerStarted","Data":"72ac03662bd176d80f8bd88567561fa7dfadb45ac0fbad4f2eefa74439e99271"} Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.759928 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qmqmm" event={"ID":"55f3dbda-02f8-4612-8c4a-94e97e23f42a","Type":"ContainerStarted","Data":"e1efa558ee67d944012a4efc8a964bcdaba7432959dcd66e3ef322f05799a8fa"} Jan 22 10:38:35 crc kubenswrapper[4752]: I0122 10:38:35.762832 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"de42e1fb067584d1447dc176c811240780dc741a9c0a1e5fb6a67b48b961f70c"} Jan 22 10:38:36 crc kubenswrapper[4752]: I0122 10:38:36.776061 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qmqmm" event={"ID":"55f3dbda-02f8-4612-8c4a-94e97e23f42a","Type":"ContainerStarted","Data":"e1cb980ca670d7313abab54a9662ad6584368e4c332a7f3c8047736f0e5b25ac"} Jan 22 10:38:36 crc kubenswrapper[4752]: I0122 10:38:36.776390 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qmqmm" Jan 22 10:38:41 crc kubenswrapper[4752]: I0122 10:38:41.120060 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qmqmm" podStartSLOduration=8.120043576 podStartE2EDuration="8.120043576s" podCreationTimestamp="2026-01-22 10:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:38:36.809353645 +0000 UTC m=+796.039296563" watchObservedRunningTime="2026-01-22 10:38:41.120043576 +0000 UTC m=+800.349986474" Jan 22 10:38:42 crc kubenswrapper[4752]: I0122 10:38:42.828675 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf" event={"ID":"af83e0d3-5874-46e3-99be-f19e070e8ef9","Type":"ContainerStarted","Data":"d52eee2536b293b23addfc1f9cff843e56bfd167b906a453d637324e59ac6041"} Jan 22 10:38:42 crc kubenswrapper[4752]: 
I0122 10:38:42.829589 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf" Jan 22 10:38:42 crc kubenswrapper[4752]: I0122 10:38:42.831688 4752 generic.go:334] "Generic (PLEG): container finished" podID="08f8add3-b808-4eb1-a512-d955d30091ee" containerID="29e030185fe012b4ee61b12e8775796fab9d8ea99559bf715f675735475ae909" exitCode=0 Jan 22 10:38:42 crc kubenswrapper[4752]: I0122 10:38:42.831798 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerDied","Data":"29e030185fe012b4ee61b12e8775796fab9d8ea99559bf715f675735475ae909"} Jan 22 10:38:42 crc kubenswrapper[4752]: I0122 10:38:42.861709 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf" podStartSLOduration=3.078653051 podStartE2EDuration="9.861684888s" podCreationTimestamp="2026-01-22 10:38:33 +0000 UTC" firstStartedPulling="2026-01-22 10:38:34.994881945 +0000 UTC m=+794.224824853" lastFinishedPulling="2026-01-22 10:38:41.777913782 +0000 UTC m=+801.007856690" observedRunningTime="2026-01-22 10:38:42.857258299 +0000 UTC m=+802.087201227" watchObservedRunningTime="2026-01-22 10:38:42.861684888 +0000 UTC m=+802.091627846" Jan 22 10:38:43 crc kubenswrapper[4752]: I0122 10:38:43.844247 4752 generic.go:334] "Generic (PLEG): container finished" podID="08f8add3-b808-4eb1-a512-d955d30091ee" containerID="097a4ce8ad413bbfaaddba34de04c29c1e17d4f4c09e703664ba426a8dbe3c1e" exitCode=0 Jan 22 10:38:43 crc kubenswrapper[4752]: I0122 10:38:43.844439 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerDied","Data":"097a4ce8ad413bbfaaddba34de04c29c1e17d4f4c09e703664ba426a8dbe3c1e"} Jan 22 10:38:44 crc kubenswrapper[4752]: I0122 10:38:44.854995 4752 generic.go:334] "Generic (PLEG): container finished" podID="08f8add3-b808-4eb1-a512-d955d30091ee" containerID="4f8b0c6c7e1d76690cd7fd6383c5af774197ac78e4bdc594da2018f4200b559f" exitCode=0 Jan 22 10:38:44 crc kubenswrapper[4752]: I0122 10:38:44.855114 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerDied","Data":"4f8b0c6c7e1d76690cd7fd6383c5af774197ac78e4bdc594da2018f4200b559f"} Jan 22 10:38:45 crc kubenswrapper[4752]: I0122 10:38:45.284060 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qmqmm" Jan 22 10:38:45 crc kubenswrapper[4752]: I0122 10:38:45.868582 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"35d9936c59a3b2108955940e60031103bd8157224175b21f19aeea1788d9829e"} Jan 22 10:38:45 crc kubenswrapper[4752]: I0122 10:38:45.868898 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"6933c8ea9c4e14b591ec5ea66ed7117a2f2a6be9ef607bc35af801c7f89ed929"} Jan 22 10:38:45 crc kubenswrapper[4752]: I0122 10:38:45.868913 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"886a8c2c4c4477d3d07db13b39ca1da3655e634e6d88ad34dcf919d3f0d04d1a"} Jan 
22 10:38:45 crc kubenswrapper[4752]: I0122 10:38:45.868923 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"fb5f9b7fc72b3c3a4aa800286c11f7744c0b2670e70cf2a6731eccdd9f51b77b"} Jan 22 10:38:45 crc kubenswrapper[4752]: I0122 10:38:45.868932 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"a15222536fe02dc2486f7b4558399ecdffa43d58816c5925a0b577032d783ec3"} Jan 22 10:38:46 crc kubenswrapper[4752]: I0122 10:38:46.881640 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8nd5" event={"ID":"08f8add3-b808-4eb1-a512-d955d30091ee","Type":"ContainerStarted","Data":"6d57f42506d5fef3cb2bde40a010f2c5ec000da595add8ab0eb8837a9a44b566"} Jan 22 10:38:46 crc kubenswrapper[4752]: I0122 10:38:46.881824 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d8nd5" Jan 22 10:38:46 crc kubenswrapper[4752]: I0122 10:38:46.923838 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d8nd5" podStartSLOduration=7.467767091 podStartE2EDuration="13.923815547s" podCreationTimestamp="2026-01-22 10:38:33 +0000 UTC" firstStartedPulling="2026-01-22 10:38:35.298593067 +0000 UTC m=+794.528535975" lastFinishedPulling="2026-01-22 10:38:41.754641523 +0000 UTC m=+800.984584431" observedRunningTime="2026-01-22 10:38:46.914113555 +0000 UTC m=+806.144056503" watchObservedRunningTime="2026-01-22 10:38:46.923815547 +0000 UTC m=+806.153758485" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.514532 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fxbwx"] Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.515718 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.519159 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.527670 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fxbwx"] Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.528096 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rqjwr" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.528101 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.660527 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4tg\" (UniqueName: \"kubernetes.io/projected/13db92cb-70c7-40a4-ada0-48332e8592ee-kube-api-access-mm4tg\") pod \"openstack-operator-index-fxbwx\" (UID: \"13db92cb-70c7-40a4-ada0-48332e8592ee\") " pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.761759 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm4tg\" (UniqueName: \"kubernetes.io/projected/13db92cb-70c7-40a4-ada0-48332e8592ee-kube-api-access-mm4tg\") pod \"openstack-operator-index-fxbwx\" (UID: \"13db92cb-70c7-40a4-ada0-48332e8592ee\") " pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.795352 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm4tg\" (UniqueName: \"kubernetes.io/projected/13db92cb-70c7-40a4-ada0-48332e8592ee-kube-api-access-mm4tg\") pod \"openstack-operator-index-fxbwx\" (UID: \"13db92cb-70c7-40a4-ada0-48332e8592ee\") " pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:48 crc kubenswrapper[4752]: I0122 10:38:48.838344 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:49 crc kubenswrapper[4752]: I0122 10:38:49.348526 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fxbwx"] Jan 22 10:38:49 crc kubenswrapper[4752]: I0122 10:38:49.910468 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxbwx" event={"ID":"13db92cb-70c7-40a4-ada0-48332e8592ee","Type":"ContainerStarted","Data":"3d651081c765bbf6986fcf291184420c0eac577afa82fa6730de60eafa0658d3"} Jan 22 10:38:50 crc kubenswrapper[4752]: I0122 10:38:50.198389 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d8nd5" Jan 22 10:38:50 crc kubenswrapper[4752]: I0122 10:38:50.256622 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d8nd5" Jan 22 10:38:51 crc kubenswrapper[4752]: I0122 10:38:51.928690 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxbwx" event={"ID":"13db92cb-70c7-40a4-ada0-48332e8592ee","Type":"ContainerStarted","Data":"6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6"} Jan 22 10:38:51 crc kubenswrapper[4752]: I0122 10:38:51.944100 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fxbwx" podStartSLOduration=1.605545297 podStartE2EDuration="3.94407933s" podCreationTimestamp="2026-01-22 10:38:48 +0000 UTC" firstStartedPulling="2026-01-22 10:38:49.353995254 +0000 UTC m=+808.583938182" lastFinishedPulling="2026-01-22 10:38:51.692529307 +0000 UTC m=+810.922472215" observedRunningTime="2026-01-22 10:38:51.940773611 +0000 UTC m=+811.170716539" watchObservedRunningTime="2026-01-22 10:38:51.94407933 +0000 UTC m=+811.174022238" Jan 22 10:38:52 crc kubenswrapper[4752]: I0122 10:38:52.679077 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fxbwx"] Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.297130 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kq5fw"] Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.298643 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.307928 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kq5fw"] Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.425742 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6djkf\" (UniqueName: \"kubernetes.io/projected/5431f278-6f91-45c5-8ab3-e911e41481fc-kube-api-access-6djkf\") pod \"openstack-operator-index-kq5fw\" (UID: \"5431f278-6f91-45c5-8ab3-e911e41481fc\") " pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.527013 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6djkf\" (UniqueName: \"kubernetes.io/projected/5431f278-6f91-45c5-8ab3-e911e41481fc-kube-api-access-6djkf\") pod \"openstack-operator-index-kq5fw\" (UID: \"5431f278-6f91-45c5-8ab3-e911e41481fc\") " pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.547107 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6djkf\" (UniqueName: \"kubernetes.io/projected/5431f278-6f91-45c5-8ab3-e911e41481fc-kube-api-access-6djkf\") pod \"openstack-operator-index-kq5fw\" (UID: \"5431f278-6f91-45c5-8ab3-e911e41481fc\") " pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.626257 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.802957 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-n5x89" Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.852615 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kq5fw"] Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.959683 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kq5fw" event={"ID":"5431f278-6f91-45c5-8ab3-e911e41481fc","Type":"ContainerStarted","Data":"f123e789b9e21131a1898f8eee8a081302ace38ffcbb81fdf74253fe7b1feb35"} Jan 22 10:38:53 crc kubenswrapper[4752]: I0122 10:38:53.959980 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fxbwx" podUID="13db92cb-70c7-40a4-ada0-48332e8592ee" containerName="registry-server" containerID="cri-o://6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6" gracePeriod=2 Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.371752 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.443483 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm4tg\" (UniqueName: \"kubernetes.io/projected/13db92cb-70c7-40a4-ada0-48332e8592ee-kube-api-access-mm4tg\") pod \"13db92cb-70c7-40a4-ada0-48332e8592ee\" (UID: \"13db92cb-70c7-40a4-ada0-48332e8592ee\") " Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.449152 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13db92cb-70c7-40a4-ada0-48332e8592ee-kube-api-access-mm4tg" (OuterVolumeSpecName: "kube-api-access-mm4tg") pod "13db92cb-70c7-40a4-ada0-48332e8592ee" (UID: "13db92cb-70c7-40a4-ada0-48332e8592ee"). InnerVolumeSpecName "kube-api-access-mm4tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.545030 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm4tg\" (UniqueName: \"kubernetes.io/projected/13db92cb-70c7-40a4-ada0-48332e8592ee-kube-api-access-mm4tg\") on node \"crc\" DevicePath \"\"" Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.733904 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-nrxvf" Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.968317 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kq5fw" event={"ID":"5431f278-6f91-45c5-8ab3-e911e41481fc","Type":"ContainerStarted","Data":"3be60adcc3ea523cae19b5b5622901ab70f9be6008a1c2214feeb927369af063"} Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.970056 4752 generic.go:334] "Generic (PLEG): container finished" podID="13db92cb-70c7-40a4-ada0-48332e8592ee" containerID="6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6" exitCode=0 Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.970093 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxbwx" event={"ID":"13db92cb-70c7-40a4-ada0-48332e8592ee","Type":"ContainerDied","Data":"6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6"} Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.970113 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxbwx" event={"ID":"13db92cb-70c7-40a4-ada0-48332e8592ee","Type":"ContainerDied","Data":"3d651081c765bbf6986fcf291184420c0eac577afa82fa6730de60eafa0658d3"} Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.970132 4752 scope.go:117] "RemoveContainer" containerID="6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6" Jan 22 10:38:54 crc kubenswrapper[4752]: I0122 10:38:54.970243 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fxbwx" Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.002003 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kq5fw" podStartSLOduration=1.925174444 podStartE2EDuration="2.001967958s" podCreationTimestamp="2026-01-22 10:38:53 +0000 UTC" firstStartedPulling="2026-01-22 10:38:53.864682775 +0000 UTC m=+813.094625683" lastFinishedPulling="2026-01-22 10:38:53.941476279 +0000 UTC m=+813.171419197" observedRunningTime="2026-01-22 10:38:54.991818684 +0000 UTC m=+814.221761592" watchObservedRunningTime="2026-01-22 10:38:55.001967958 +0000 UTC m=+814.231910916" Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.042514 4752 scope.go:117] "RemoveContainer" containerID="6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6" Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.046472 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fxbwx"] Jan 22 10:38:55 crc kubenswrapper[4752]: E0122 10:38:55.052562 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6\": container with ID starting with 6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6 not found: ID does not exist" containerID="6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6" Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.052569 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fxbwx"] Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.052619 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6"} err="failed to get container status \"6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6\": rpc error: code = NotFound desc = could not find container \"6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6\": container with ID starting with 6eccf2e3923b25b8fb092cbf656c614efddb25c87c5c5c13e183b5c65b9ff0d6 not found: ID does not exist" Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.111501 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13db92cb-70c7-40a4-ada0-48332e8592ee" path="/var/lib/kubelet/pods/13db92cb-70c7-40a4-ada0-48332e8592ee/volumes" Jan 22 10:38:55 crc kubenswrapper[4752]: I0122 10:38:55.201748 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d8nd5" Jan 22 10:38:57 crc kubenswrapper[4752]: I0122 10:38:57.723608 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:38:57 crc kubenswrapper[4752]: I0122 10:38:57.724432 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:39:03 crc kubenswrapper[4752]: I0122 10:39:03.627062 4752 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:39:03 crc kubenswrapper[4752]: I0122 10:39:03.627419 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:39:03 crc kubenswrapper[4752]: I0122 10:39:03.658958 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:39:04 crc kubenswrapper[4752]: I0122 10:39:04.077762 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kq5fw" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.435327 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz"] Jan 22 10:39:11 crc kubenswrapper[4752]: E0122 10:39:11.436596 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13db92cb-70c7-40a4-ada0-48332e8592ee" containerName="registry-server" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.436620 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="13db92cb-70c7-40a4-ada0-48332e8592ee" containerName="registry-server" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.436887 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="13db92cb-70c7-40a4-ada0-48332e8592ee" containerName="registry-server" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.438329 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.441070 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bnt9l" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.451664 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz"] Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.586395 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-util\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.586635 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-bundle\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.586707 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnsz\" (UniqueName: \"kubernetes.io/projected/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-kube-api-access-kwnsz\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " 
pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.688662 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-bundle\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.689194 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnsz\" (UniqueName: \"kubernetes.io/projected/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-kube-api-access-kwnsz\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.689433 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-util\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.690608 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-util\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.691522 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-bundle\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.724514 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnsz\" (UniqueName: \"kubernetes.io/projected/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-kube-api-access-kwnsz\") pod \"457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:11 crc kubenswrapper[4752]: I0122 10:39:11.759460 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:12 crc kubenswrapper[4752]: I0122 10:39:12.028039 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz"] Jan 22 10:39:12 crc kubenswrapper[4752]: I0122 10:39:12.119285 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" event={"ID":"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276","Type":"ContainerStarted","Data":"96e6551d01bfcd3e5fccb9dd65211d70138245fa43cfd8ddb5fda35c81c07cbc"} Jan 22 10:39:13 crc kubenswrapper[4752]: I0122 10:39:13.127661 4752 generic.go:334] "Generic (PLEG): container finished" podID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerID="b6a750f8e317ccc62140509050a222b2909401358ebe107a3b4750ed57b3e6eb" exitCode=0 Jan 22 10:39:13 crc kubenswrapper[4752]: I0122 10:39:13.127725 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" event={"ID":"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276","Type":"ContainerDied","Data":"b6a750f8e317ccc62140509050a222b2909401358ebe107a3b4750ed57b3e6eb"} Jan 22 10:39:14 crc kubenswrapper[4752]: I0122 10:39:14.137040 4752 generic.go:334] "Generic (PLEG): container finished" podID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerID="77756768f9a1231b7cb2f7a2a0c9905bd275cbc070377b1b1aeaf7568b791a4a" exitCode=0 Jan 22 10:39:14 crc kubenswrapper[4752]: I0122 10:39:14.137192 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" event={"ID":"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276","Type":"ContainerDied","Data":"77756768f9a1231b7cb2f7a2a0c9905bd275cbc070377b1b1aeaf7568b791a4a"} Jan 22 10:39:15 crc kubenswrapper[4752]: I0122 10:39:15.149364 4752 generic.go:334] "Generic (PLEG): container finished" podID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerID="0cd1de09eb147f84919bde6f4fd3ea87c1dc493512e7f29d99462dbd106417f0" exitCode=0 Jan 22 10:39:15 crc kubenswrapper[4752]: I0122 10:39:15.149431 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" event={"ID":"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276","Type":"ContainerDied","Data":"0cd1de09eb147f84919bde6f4fd3ea87c1dc493512e7f29d99462dbd106417f0"} Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.476765 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.656506 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnsz\" (UniqueName: \"kubernetes.io/projected/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-kube-api-access-kwnsz\") pod \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.656631 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-util\") pod \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.666154 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-kube-api-access-kwnsz" (OuterVolumeSpecName: "kube-api-access-kwnsz") pod "a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" (UID: "a490ee99-f6e3-4b00-8fad-8bdb6f7aa276"). InnerVolumeSpecName "kube-api-access-kwnsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.676006 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-bundle\") pod \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\" (UID: \"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276\") " Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.676633 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwnsz\" (UniqueName: \"kubernetes.io/projected/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-kube-api-access-kwnsz\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.677297 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-bundle" (OuterVolumeSpecName: "bundle") pod "a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" (UID: "a490ee99-f6e3-4b00-8fad-8bdb6f7aa276"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.689890 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-util" (OuterVolumeSpecName: "util") pod "a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" (UID: "a490ee99-f6e3-4b00-8fad-8bdb6f7aa276"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.778296 4752 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-util\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:16 crc kubenswrapper[4752]: I0122 10:39:16.778345 4752 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a490ee99-f6e3-4b00-8fad-8bdb6f7aa276-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:39:17 crc kubenswrapper[4752]: I0122 10:39:17.171512 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" event={"ID":"a490ee99-f6e3-4b00-8fad-8bdb6f7aa276","Type":"ContainerDied","Data":"96e6551d01bfcd3e5fccb9dd65211d70138245fa43cfd8ddb5fda35c81c07cbc"} Jan 22 10:39:17 crc kubenswrapper[4752]: I0122 10:39:17.171579 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e6551d01bfcd3e5fccb9dd65211d70138245fa43cfd8ddb5fda35c81c07cbc" Jan 22 10:39:17 crc kubenswrapper[4752]: I0122 10:39:17.171608 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/457d0a51f54525a262d5efe2d61a251845b3d41b95aab32496c4e56f8f2b9nz" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.388823 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn"] Jan 22 10:39:23 crc kubenswrapper[4752]: E0122 10:39:23.389295 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="pull" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.389306 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="pull" Jan 22 10:39:23 crc kubenswrapper[4752]: E0122 10:39:23.389316 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="util" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.389321 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="util" Jan 22 10:39:23 crc kubenswrapper[4752]: E0122 10:39:23.389336 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="extract" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.389342 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="extract" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.389446 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a490ee99-f6e3-4b00-8fad-8bdb6f7aa276" containerName="extract" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.389816 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.393213 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-d4gcj" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.472076 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn"] Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.542698 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svs8\" (UniqueName: \"kubernetes.io/projected/4aadd88c-714e-4060-a60d-529adf382d2c-kube-api-access-8svs8\") pod \"openstack-operator-controller-init-5fdb66b569-g4qwn\" (UID: \"4aadd88c-714e-4060-a60d-529adf382d2c\") " pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.644761 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svs8\" (UniqueName: \"kubernetes.io/projected/4aadd88c-714e-4060-a60d-529adf382d2c-kube-api-access-8svs8\") pod \"openstack-operator-controller-init-5fdb66b569-g4qwn\" (UID: \"4aadd88c-714e-4060-a60d-529adf382d2c\") " pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.675655 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svs8\" (UniqueName: \"kubernetes.io/projected/4aadd88c-714e-4060-a60d-529adf382d2c-kube-api-access-8svs8\") pod \"openstack-operator-controller-init-5fdb66b569-g4qwn\" (UID: \"4aadd88c-714e-4060-a60d-529adf382d2c\") " pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.709269 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:39:23 crc kubenswrapper[4752]: I0122 10:39:23.961428 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn"] Jan 22 10:39:23 crc kubenswrapper[4752]: W0122 10:39:23.971121 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aadd88c_714e_4060_a60d_529adf382d2c.slice/crio-4eb72f6270008b14018b5f536e0c3f61d2ed2587b94592116d4343c73d8a7e0c WatchSource:0}: Error finding container 4eb72f6270008b14018b5f536e0c3f61d2ed2587b94592116d4343c73d8a7e0c: Status 404 returned error can't find the container with id 4eb72f6270008b14018b5f536e0c3f61d2ed2587b94592116d4343c73d8a7e0c Jan 22 10:39:24 crc kubenswrapper[4752]: I0122 10:39:24.247129 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" event={"ID":"4aadd88c-714e-4060-a60d-529adf382d2c","Type":"ContainerStarted","Data":"4eb72f6270008b14018b5f536e0c3f61d2ed2587b94592116d4343c73d8a7e0c"} Jan 22 10:39:27 crc kubenswrapper[4752]: I0122 10:39:27.723420 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:39:27 crc kubenswrapper[4752]: I0122 10:39:27.723735 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:39:27 crc kubenswrapper[4752]: I0122 10:39:27.723785 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:39:27 crc kubenswrapper[4752]: I0122 10:39:27.724189 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"042ed95c4b3ff840b51322a9e98a655ce91eb46ad1b15d6ef52fd539d78d7a7d"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:39:27 crc kubenswrapper[4752]: I0122 10:39:27.724238 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://042ed95c4b3ff840b51322a9e98a655ce91eb46ad1b15d6ef52fd539d78d7a7d" gracePeriod=600 Jan 22 10:39:28 crc kubenswrapper[4752]: I0122 10:39:28.274980 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="042ed95c4b3ff840b51322a9e98a655ce91eb46ad1b15d6ef52fd539d78d7a7d" exitCode=0 Jan 22 10:39:28 crc kubenswrapper[4752]: I0122 10:39:28.275050 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" 
event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"042ed95c4b3ff840b51322a9e98a655ce91eb46ad1b15d6ef52fd539d78d7a7d"} Jan 22 10:39:28 crc kubenswrapper[4752]: I0122 10:39:28.275353 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"2e87e3a6ca557c47aa1a29b28c97952e66d28f228a2a925e37de3714e751682c"} Jan 22 10:39:28 crc kubenswrapper[4752]: I0122 10:39:28.275380 4752 scope.go:117] "RemoveContainer" containerID="65d7aaf92a1adc89263932ce8b3a2116ed56843d8468de5cdaef20db861a025e" Jan 22 10:39:29 crc kubenswrapper[4752]: I0122 10:39:29.286586 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" event={"ID":"4aadd88c-714e-4060-a60d-529adf382d2c","Type":"ContainerStarted","Data":"b398b2026c03010d40273eadc34103dd2efa41c6e6e988cca6241ffe7f7abf71"} Jan 22 10:39:29 crc kubenswrapper[4752]: I0122 10:39:29.287093 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:39:29 crc kubenswrapper[4752]: I0122 10:39:29.323287 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" podStartSLOduration=2.171651789 podStartE2EDuration="6.323262933s" podCreationTimestamp="2026-01-22 10:39:23 +0000 UTC" firstStartedPulling="2026-01-22 10:39:23.973827872 +0000 UTC m=+843.203770780" lastFinishedPulling="2026-01-22 10:39:28.125439016 +0000 UTC m=+847.355381924" observedRunningTime="2026-01-22 10:39:29.318385751 +0000 UTC m=+848.548328659" watchObservedRunningTime="2026-01-22 10:39:29.323262933 +0000 UTC m=+848.553205871" Jan 22 10:39:33 crc kubenswrapper[4752]: I0122 10:39:33.714374 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5fdb66b569-g4qwn" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.007350 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gmm2"] Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.009550 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.023447 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gmm2"] Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.149554 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzkr\" (UniqueName: \"kubernetes.io/projected/48e9665e-d388-4637-b628-34c7a8fc4357-kube-api-access-xwzkr\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.150003 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-utilities\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.150211 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-catalog-content\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.251079 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-utilities\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.251122 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-catalog-content\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.251202 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzkr\" (UniqueName: \"kubernetes.io/projected/48e9665e-d388-4637-b628-34c7a8fc4357-kube-api-access-xwzkr\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.252055 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-utilities\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.252263 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-catalog-content\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.275746 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xwzkr\" (UniqueName: \"kubernetes.io/projected/48e9665e-d388-4637-b628-34c7a8fc4357-kube-api-access-xwzkr\") pod \"community-operators-9gmm2\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.387876 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:03 crc kubenswrapper[4752]: I0122 10:40:03.728525 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gmm2"] Jan 22 10:40:04 crc kubenswrapper[4752]: I0122 10:40:04.575064 4752 generic.go:334] "Generic (PLEG): container finished" podID="48e9665e-d388-4637-b628-34c7a8fc4357" containerID="2fdd59b50f120087910514e7e280f9b550d1334935b3136960ece9806106fe98" exitCode=0 Jan 22 10:40:04 crc kubenswrapper[4752]: I0122 10:40:04.575347 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerDied","Data":"2fdd59b50f120087910514e7e280f9b550d1334935b3136960ece9806106fe98"} Jan 22 10:40:04 crc kubenswrapper[4752]: I0122 10:40:04.575380 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerStarted","Data":"40b5fcfd65b7dec5b4abaa6ea9f459eb1a65c4e6b41c707889bf6cad0504fbcf"} Jan 22 10:40:04 crc kubenswrapper[4752]: I0122 10:40:04.577564 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:40:05 crc kubenswrapper[4752]: I0122 10:40:05.586131 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerStarted","Data":"ac384a087713918af368ea0354af014fc0475237fa108046304072b84e8ca6af"} Jan 22 10:40:06 crc kubenswrapper[4752]: I0122 10:40:06.593109 4752 generic.go:334] "Generic (PLEG): container finished" podID="48e9665e-d388-4637-b628-34c7a8fc4357" containerID="ac384a087713918af368ea0354af014fc0475237fa108046304072b84e8ca6af" exitCode=0 Jan 22 10:40:06 crc kubenswrapper[4752]: I0122 10:40:06.593159 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerDied","Data":"ac384a087713918af368ea0354af014fc0475237fa108046304072b84e8ca6af"} Jan 22 10:40:07 crc kubenswrapper[4752]: I0122 10:40:07.601689 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerStarted","Data":"f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3"} Jan 22 10:40:07 crc kubenswrapper[4752]: I0122 10:40:07.623955 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gmm2" podStartSLOduration=3.237758753 podStartE2EDuration="5.623933153s" podCreationTimestamp="2026-01-22 10:40:02 +0000 UTC" firstStartedPulling="2026-01-22 10:40:04.577350129 +0000 UTC m=+883.807293037" lastFinishedPulling="2026-01-22 10:40:06.963524529 +0000 UTC m=+886.193467437" observedRunningTime="2026-01-22 10:40:07.621054175 +0000 UTC m=+886.850997083" watchObservedRunningTime="2026-01-22 
10:40:07.623933153 +0000 UTC m=+886.853876071" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.496413 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.497538 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.499755 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vfddq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.531686 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.532571 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.534714 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qx6b9" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.544648 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.554307 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.555393 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.557672 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f82zs" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.570938 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.580520 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.602003 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.602783 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.605931 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9gwvq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.606579 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.607403 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.611213 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2czrv" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.624821 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.633254 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.642220 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.643041 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.645342 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-f9qkw" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.659347 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b625\" (UniqueName: \"kubernetes.io/projected/a4e63413-7a90-4e3e-8380-000b611a7c9a-kube-api-access-9b625\") pod \"barbican-operator-controller-manager-59dd8b7cbf-nw62c\" (UID: \"a4e63413-7a90-4e3e-8380-000b611a7c9a\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.659420 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgs6\" (UniqueName: \"kubernetes.io/projected/d60c858d-c3bc-4088-96c8-2fb5d865826a-kube-api-access-hpgs6\") pod \"designate-operator-controller-manager-b45d7bf98-j2fbt\" (UID: \"d60c858d-c3bc-4088-96c8-2fb5d865826a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.659441 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpvt\" (UniqueName: \"kubernetes.io/projected/488cb72d-e028-497a-983c-a0a47113e285-kube-api-access-9kpvt\") pod \"cinder-operator-controller-manager-69cf5d4557-5w4l6\" (UID: \"488cb72d-e028-497a-983c-a0a47113e285\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.666095 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.672809 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.673838 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.677066 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x4nnn" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.677290 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.677472 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.678269 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.679371 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-r5h7n" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.689122 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.694716 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.702074 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.706207 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.707894 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rd79k" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.710697 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.711527 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.713638 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v9hsh" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.718061 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.728535 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.735278 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.736200 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.743222 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9b4bd" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.752116 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.760279 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgs6\" (UniqueName: \"kubernetes.io/projected/d60c858d-c3bc-4088-96c8-2fb5d865826a-kube-api-access-hpgs6\") pod \"designate-operator-controller-manager-b45d7bf98-j2fbt\" (UID: \"d60c858d-c3bc-4088-96c8-2fb5d865826a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.760490 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpvt\" (UniqueName: \"kubernetes.io/projected/488cb72d-e028-497a-983c-a0a47113e285-kube-api-access-9kpvt\") pod \"cinder-operator-controller-manager-69cf5d4557-5w4l6\" (UID: \"488cb72d-e028-497a-983c-a0a47113e285\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.760573 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dp9l\" (UniqueName: \"kubernetes.io/projected/edb8c589-c1ae-4438-b32d-2dcdfec470f6-kube-api-access-5dp9l\") pod \"horizon-operator-controller-manager-77d5c5b54f-62w75\" (UID: \"edb8c589-c1ae-4438-b32d-2dcdfec470f6\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.760679 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scf9z\" (UniqueName: \"kubernetes.io/projected/e2b564d9-5b5d-42f9-84c1-0ff33b54ff22-kube-api-access-scf9z\") pod \"heat-operator-controller-manager-594c8c9d5d-kzvxf\" (UID: \"e2b564d9-5b5d-42f9-84c1-0ff33b54ff22\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.760764 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cklkl\" (UniqueName: \"kubernetes.io/projected/730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6-kube-api-access-cklkl\") pod \"glance-operator-controller-manager-78fdd796fd-qtrfq\" (UID: \"730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.760877 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b625\" (UniqueName: \"kubernetes.io/projected/a4e63413-7a90-4e3e-8380-000b611a7c9a-kube-api-access-9b625\") pod \"barbican-operator-controller-manager-59dd8b7cbf-nw62c\" (UID: \"a4e63413-7a90-4e3e-8380-000b611a7c9a\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.761540 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4"] Jan 22 10:40:11 crc 
kubenswrapper[4752]: I0122 10:40:11.766986 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.776090 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.778232 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.791929 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.794159 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.796772 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.800743 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-d97g6" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.806670 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2f2d9" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.812121 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b625\" (UniqueName: \"kubernetes.io/projected/a4e63413-7a90-4e3e-8380-000b611a7c9a-kube-api-access-9b625\") pod \"barbican-operator-controller-manager-59dd8b7cbf-nw62c\" (UID: \"a4e63413-7a90-4e3e-8380-000b611a7c9a\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.819414 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpvt\" (UniqueName: \"kubernetes.io/projected/488cb72d-e028-497a-983c-a0a47113e285-kube-api-access-9kpvt\") pod \"cinder-operator-controller-manager-69cf5d4557-5w4l6\" (UID: \"488cb72d-e028-497a-983c-a0a47113e285\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.820902 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgs6\" (UniqueName: \"kubernetes.io/projected/d60c858d-c3bc-4088-96c8-2fb5d865826a-kube-api-access-hpgs6\") pod \"designate-operator-controller-manager-b45d7bf98-j2fbt\" (UID: \"d60c858d-c3bc-4088-96c8-2fb5d865826a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.859541 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rvxjx" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.863903 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.864191 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.870205 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872241 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x77z\" (UniqueName: \"kubernetes.io/projected/a1ada694-ef02-43d6-bfa7-98c3437af5bc-kube-api-access-9x77z\") pod \"manila-operator-controller-manager-78c6999f6f-qmpws\" (UID: \"a1ada694-ef02-43d6-bfa7-98c3437af5bc\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872283 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5sx\" (UniqueName: \"kubernetes.io/projected/2d5d5722-0553-4d40-b618-a1c6d2e9f727-kube-api-access-8t5sx\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872315 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwm6h\" (UniqueName: \"kubernetes.io/projected/12ce9837-825a-4ee8-a0b7-6d65a6c1766c-kube-api-access-fwm6h\") pod \"ironic-operator-controller-manager-69d6c9f5b8-2vmkm\" (UID: \"12ce9837-825a-4ee8-a0b7-6d65a6c1766c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872342 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dp9l\" (UniqueName: \"kubernetes.io/projected/edb8c589-c1ae-4438-b32d-2dcdfec470f6-kube-api-access-5dp9l\") pod \"horizon-operator-controller-manager-77d5c5b54f-62w75\" (UID: \"edb8c589-c1ae-4438-b32d-2dcdfec470f6\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872366 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scf9z\" (UniqueName: \"kubernetes.io/projected/e2b564d9-5b5d-42f9-84c1-0ff33b54ff22-kube-api-access-scf9z\") pod \"heat-operator-controller-manager-594c8c9d5d-kzvxf\" (UID: \"e2b564d9-5b5d-42f9-84c1-0ff33b54ff22\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872391 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2hb\" (UniqueName: \"kubernetes.io/projected/79910a26-5ba2-4244-8880-31bd1322aec0-kube-api-access-tv2hb\") pod \"mariadb-operator-controller-manager-c87fff755-6pt5z\" (UID: \"79910a26-5ba2-4244-8880-31bd1322aec0\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872407 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " 
pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872427 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cklkl\" (UniqueName: \"kubernetes.io/projected/730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6-kube-api-access-cklkl\") pod \"glance-operator-controller-manager-78fdd796fd-qtrfq\" (UID: \"730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872459 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbb7t\" (UniqueName: \"kubernetes.io/projected/82cf7c76-cd84-425a-acc7-9c3d9842eb96-kube-api-access-zbb7t\") pod \"neutron-operator-controller-manager-5d8f59fb49-pqzp4\" (UID: \"82cf7c76-cd84-425a-acc7-9c3d9842eb96\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.872494 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhsm\" (UniqueName: \"kubernetes.io/projected/af9949b6-e7f5-409e-8d78-012a9298233b-kube-api-access-prhsm\") pod \"keystone-operator-controller-manager-b8b6d4659-jqfns\" (UID: \"af9949b6-e7f5-409e-8d78-012a9298233b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.880241 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.905626 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.906162 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scf9z\" (UniqueName: \"kubernetes.io/projected/e2b564d9-5b5d-42f9-84c1-0ff33b54ff22-kube-api-access-scf9z\") pod \"heat-operator-controller-manager-594c8c9d5d-kzvxf\" (UID: \"e2b564d9-5b5d-42f9-84c1-0ff33b54ff22\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.916959 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dp9l\" (UniqueName: \"kubernetes.io/projected/edb8c589-c1ae-4438-b32d-2dcdfec470f6-kube-api-access-5dp9l\") pod \"horizon-operator-controller-manager-77d5c5b54f-62w75\" (UID: \"edb8c589-c1ae-4438-b32d-2dcdfec470f6\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.918794 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cklkl\" (UniqueName: \"kubernetes.io/projected/730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6-kube-api-access-cklkl\") pod \"glance-operator-controller-manager-78fdd796fd-qtrfq\" (UID: \"730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.931432 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.934142 4752 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.935553 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.937776 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v7fw8" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.937972 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.942954 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.943890 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.945415 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bbvcp" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.959331 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.963988 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.964941 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.971818 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.972885 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.976997 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq"] Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.979775 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.980158 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.980466 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-56k7d" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.983172 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv2hb\" (UniqueName: \"kubernetes.io/projected/79910a26-5ba2-4244-8880-31bd1322aec0-kube-api-access-tv2hb\") pod \"mariadb-operator-controller-manager-c87fff755-6pt5z\" (UID: \"79910a26-5ba2-4244-8880-31bd1322aec0\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.984093 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:11 crc kubenswrapper[4752]: E0122 10:40:11.984266 4752 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 10:40:11 crc kubenswrapper[4752]: E0122 10:40:11.984331 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert podName:2d5d5722-0553-4d40-b618-a1c6d2e9f727 nodeName:}" failed. No retries permitted until 2026-01-22 10:40:12.484311554 +0000 UTC m=+891.714254462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert") pod "infra-operator-controller-manager-6b5d9f997d-clxq7" (UID: "2d5d5722-0553-4d40-b618-a1c6d2e9f727") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.984606 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-drg9m" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.984911 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbb7t\" (UniqueName: \"kubernetes.io/projected/82cf7c76-cd84-425a-acc7-9c3d9842eb96-kube-api-access-zbb7t\") pod \"neutron-operator-controller-manager-5d8f59fb49-pqzp4\" (UID: \"82cf7c76-cd84-425a-acc7-9c3d9842eb96\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.984994 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhsm\" (UniqueName: \"kubernetes.io/projected/af9949b6-e7f5-409e-8d78-012a9298233b-kube-api-access-prhsm\") pod \"keystone-operator-controller-manager-b8b6d4659-jqfns\" (UID: \"af9949b6-e7f5-409e-8d78-012a9298233b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.985042 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s47k2\" (UniqueName: \"kubernetes.io/projected/c11a96f0-c68c-4249-b1b3-f7e427444776-kube-api-access-s47k2\") pod \"nova-operator-controller-manager-6b8bc8d87d-knf8v\" (UID: 
\"c11a96f0-c68c-4249-b1b3-f7e427444776\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.985112 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x77z\" (UniqueName: \"kubernetes.io/projected/a1ada694-ef02-43d6-bfa7-98c3437af5bc-kube-api-access-9x77z\") pod \"manila-operator-controller-manager-78c6999f6f-qmpws\" (UID: \"a1ada694-ef02-43d6-bfa7-98c3437af5bc\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.985135 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnm8\" (UniqueName: \"kubernetes.io/projected/2fbc4a3c-0772-406d-8731-b49f80fa109c-kube-api-access-dsnm8\") pod \"octavia-operator-controller-manager-7bd9774b6-5ks2h\" (UID: \"2fbc4a3c-0772-406d-8731-b49f80fa109c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.985164 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t5sx\" (UniqueName: \"kubernetes.io/projected/2d5d5722-0553-4d40-b618-a1c6d2e9f727-kube-api-access-8t5sx\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:11 crc kubenswrapper[4752]: I0122 10:40:11.985224 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwm6h\" (UniqueName: \"kubernetes.io/projected/12ce9837-825a-4ee8-a0b7-6d65a6c1766c-kube-api-access-fwm6h\") pod \"ironic-operator-controller-manager-69d6c9f5b8-2vmkm\" (UID: \"12ce9837-825a-4ee8-a0b7-6d65a6c1766c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.007099 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwm6h\" (UniqueName: \"kubernetes.io/projected/12ce9837-825a-4ee8-a0b7-6d65a6c1766c-kube-api-access-fwm6h\") pod \"ironic-operator-controller-manager-69d6c9f5b8-2vmkm\" (UID: \"12ce9837-825a-4ee8-a0b7-6d65a6c1766c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.007208 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2hb\" (UniqueName: \"kubernetes.io/projected/79910a26-5ba2-4244-8880-31bd1322aec0-kube-api-access-tv2hb\") pod \"mariadb-operator-controller-manager-c87fff755-6pt5z\" (UID: \"79910a26-5ba2-4244-8880-31bd1322aec0\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.008114 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t5sx\" (UniqueName: \"kubernetes.io/projected/2d5d5722-0553-4d40-b618-a1c6d2e9f727-kube-api-access-8t5sx\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.009098 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq"] Jan 22 10:40:12 crc 
kubenswrapper[4752]: I0122 10:40:12.010817 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhsm\" (UniqueName: \"kubernetes.io/projected/af9949b6-e7f5-409e-8d78-012a9298233b-kube-api-access-prhsm\") pod \"keystone-operator-controller-manager-b8b6d4659-jqfns\" (UID: \"af9949b6-e7f5-409e-8d78-012a9298233b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.020573 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x77z\" (UniqueName: \"kubernetes.io/projected/a1ada694-ef02-43d6-bfa7-98c3437af5bc-kube-api-access-9x77z\") pod \"manila-operator-controller-manager-78c6999f6f-qmpws\" (UID: \"a1ada694-ef02-43d6-bfa7-98c3437af5bc\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.021604 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.024644 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.036777 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.056773 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbb7t\" (UniqueName: \"kubernetes.io/projected/82cf7c76-cd84-425a-acc7-9c3d9842eb96-kube-api-access-zbb7t\") pod \"neutron-operator-controller-manager-5d8f59fb49-pqzp4\" (UID: \"82cf7c76-cd84-425a-acc7-9c3d9842eb96\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.060079 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.073516 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.074768 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.074977 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.077605 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rf4gm" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088572 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdts9\" (UniqueName: \"kubernetes.io/projected/4036fb44-c28e-4cb4-9284-209765d5a53d-kube-api-access-jdts9\") pod \"swift-operator-controller-manager-547cbdb99f-p5srq\" (UID: \"4036fb44-c28e-4cb4-9284-209765d5a53d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088665 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s47k2\" (UniqueName: \"kubernetes.io/projected/c11a96f0-c68c-4249-b1b3-f7e427444776-kube-api-access-s47k2\") pod \"nova-operator-controller-manager-6b8bc8d87d-knf8v\" (UID: \"c11a96f0-c68c-4249-b1b3-f7e427444776\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088685 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shz7z\" (UniqueName: \"kubernetes.io/projected/fbffdae6-edb4-4812-904e-c1ef2783b477-kube-api-access-shz7z\") pod \"placement-operator-controller-manager-5d646b7d76-xrrtq\" (UID: \"fbffdae6-edb4-4812-904e-c1ef2783b477\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088702 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmpl\" (UniqueName: \"kubernetes.io/projected/70115a40-1b24-4548-bce1-41babb6186c4-kube-api-access-8vmpl\") pod \"ovn-operator-controller-manager-55db956ddc-xvznb\" (UID: \"70115a40-1b24-4548-bce1-41babb6186c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088731 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsnm8\" (UniqueName: \"kubernetes.io/projected/2fbc4a3c-0772-406d-8731-b49f80fa109c-kube-api-access-dsnm8\") pod \"octavia-operator-controller-manager-7bd9774b6-5ks2h\" (UID: \"2fbc4a3c-0772-406d-8731-b49f80fa109c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088748 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btpdc\" (UniqueName: \"kubernetes.io/projected/ffb1d469-6623-4086-a40b-66f153b47bcf-kube-api-access-btpdc\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.088778 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.108098 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.108886 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s47k2\" (UniqueName: \"kubernetes.io/projected/c11a96f0-c68c-4249-b1b3-f7e427444776-kube-api-access-s47k2\") pod \"nova-operator-controller-manager-6b8bc8d87d-knf8v\" (UID: \"c11a96f0-c68c-4249-b1b3-f7e427444776\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.113501 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsnm8\" (UniqueName: \"kubernetes.io/projected/2fbc4a3c-0772-406d-8731-b49f80fa109c-kube-api-access-dsnm8\") pod \"octavia-operator-controller-manager-7bd9774b6-5ks2h\" (UID: \"2fbc4a3c-0772-406d-8731-b49f80fa109c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.150576 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.151741 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.153775 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pp8d4" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.169284 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.172848 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.184097 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.184754 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.185885 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.188199 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dlc6s" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.189839 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shz7z\" (UniqueName: \"kubernetes.io/projected/fbffdae6-edb4-4812-904e-c1ef2783b477-kube-api-access-shz7z\") pod \"placement-operator-controller-manager-5d646b7d76-xrrtq\" (UID: \"fbffdae6-edb4-4812-904e-c1ef2783b477\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.189910 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vmpl\" (UniqueName: \"kubernetes.io/projected/70115a40-1b24-4548-bce1-41babb6186c4-kube-api-access-8vmpl\") pod \"ovn-operator-controller-manager-55db956ddc-xvznb\" (UID: \"70115a40-1b24-4548-bce1-41babb6186c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.189988 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btpdc\" (UniqueName: \"kubernetes.io/projected/ffb1d469-6623-4086-a40b-66f153b47bcf-kube-api-access-btpdc\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.190051 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8d9p\" (UniqueName: \"kubernetes.io/projected/62ed39f1-2d59-480f-a71b-20dbdb1a346a-kube-api-access-z8d9p\") pod \"telemetry-operator-controller-manager-85cd9769bb-7vg6x\" (UID: \"62ed39f1-2d59-480f-a71b-20dbdb1a346a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.190168 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.190261 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdts9\" (UniqueName: \"kubernetes.io/projected/4036fb44-c28e-4cb4-9284-209765d5a53d-kube-api-access-jdts9\") pod \"swift-operator-controller-manager-547cbdb99f-p5srq\" (UID: \"4036fb44-c28e-4cb4-9284-209765d5a53d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.194404 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28"] Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.194432 4752 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:40:12 crc 
kubenswrapper[4752]: E0122 10:40:12.194544 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert podName:ffb1d469-6623-4086-a40b-66f153b47bcf nodeName:}" failed. No retries permitted until 2026-01-22 10:40:12.694519721 +0000 UTC m=+891.924462699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" (UID: "ffb1d469-6623-4086-a40b-66f153b47bcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.210648 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.213169 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.221985 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vmpl\" (UniqueName: \"kubernetes.io/projected/70115a40-1b24-4548-bce1-41babb6186c4-kube-api-access-8vmpl\") pod \"ovn-operator-controller-manager-55db956ddc-xvznb\" (UID: \"70115a40-1b24-4548-bce1-41babb6186c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.222870 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.223156 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.224093 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pxsv5" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.228109 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btpdc\" (UniqueName: \"kubernetes.io/projected/ffb1d469-6623-4086-a40b-66f153b47bcf-kube-api-access-btpdc\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.232196 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shz7z\" (UniqueName: \"kubernetes.io/projected/fbffdae6-edb4-4812-904e-c1ef2783b477-kube-api-access-shz7z\") pod \"placement-operator-controller-manager-5d646b7d76-xrrtq\" (UID: \"fbffdae6-edb4-4812-904e-c1ef2783b477\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.234431 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdts9\" (UniqueName: \"kubernetes.io/projected/4036fb44-c28e-4cb4-9284-209765d5a53d-kube-api-access-jdts9\") pod \"swift-operator-controller-manager-547cbdb99f-p5srq\" (UID: \"4036fb44-c28e-4cb4-9284-209765d5a53d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.243581 4752 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.271764 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.294667 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mf4\" (UniqueName: \"kubernetes.io/projected/a0267408-c09a-435a-94f6-5613498da9ca-kube-api-access-c6mf4\") pod \"test-operator-controller-manager-69797bbcbd-rpf8x\" (UID: \"a0267408-c09a-435a-94f6-5613498da9ca\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.294763 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtll\" (UniqueName: \"kubernetes.io/projected/36859a98-81f1-4ad5-aceb-b0013bf8aa42-kube-api-access-fwtll\") pod \"watcher-operator-controller-manager-8dc8cff97-5ff28\" (UID: \"36859a98-81f1-4ad5-aceb-b0013bf8aa42\") " pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.294824 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8d9p\" (UniqueName: \"kubernetes.io/projected/62ed39f1-2d59-480f-a71b-20dbdb1a346a-kube-api-access-z8d9p\") pod \"telemetry-operator-controller-manager-85cd9769bb-7vg6x\" (UID: \"62ed39f1-2d59-480f-a71b-20dbdb1a346a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.295549 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.308813 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.319313 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gmdf7" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.348712 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.373880 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.376163 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.378016 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz"] Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.388189 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8d9p\" (UniqueName: \"kubernetes.io/projected/62ed39f1-2d59-480f-a71b-20dbdb1a346a-kube-api-access-z8d9p\") pod \"telemetry-operator-controller-manager-85cd9769bb-7vg6x\" (UID: \"62ed39f1-2d59-480f-a71b-20dbdb1a346a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.400511 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtll\" (UniqueName: \"kubernetes.io/projected/36859a98-81f1-4ad5-aceb-b0013bf8aa42-kube-api-access-fwtll\") pod \"watcher-operator-controller-manager-8dc8cff97-5ff28\" (UID: \"36859a98-81f1-4ad5-aceb-b0013bf8aa42\") " pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.400573 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.400676 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkl7z\" (UniqueName: \"kubernetes.io/projected/7fd4a737-55cc-447b-a14a-e5f46b1b392d-kube-api-access-pkl7z\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.400710 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mf4\" (UniqueName: \"kubernetes.io/projected/a0267408-c09a-435a-94f6-5613498da9ca-kube-api-access-c6mf4\") pod \"test-operator-controller-manager-69797bbcbd-rpf8x\" (UID: \"a0267408-c09a-435a-94f6-5613498da9ca\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.400733 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.400760 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8gf4\" (UniqueName: \"kubernetes.io/projected/ec51e290-a33a-47de-a766-85507801ff1b-kube-api-access-c8gf4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vfqwz\" (UID: \"ec51e290-a33a-47de-a766-85507801ff1b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" 
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.408380 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.417803 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mf4\" (UniqueName: \"kubernetes.io/projected/a0267408-c09a-435a-94f6-5613498da9ca-kube-api-access-c6mf4\") pod \"test-operator-controller-manager-69797bbcbd-rpf8x\" (UID: \"a0267408-c09a-435a-94f6-5613498da9ca\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.418962 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtll\" (UniqueName: \"kubernetes.io/projected/36859a98-81f1-4ad5-aceb-b0013bf8aa42-kube-api-access-fwtll\") pod \"watcher-operator-controller-manager-8dc8cff97-5ff28\" (UID: \"36859a98-81f1-4ad5-aceb-b0013bf8aa42\") " pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.502505 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.502649 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.502679 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkl7z\" (UniqueName: \"kubernetes.io/projected/7fd4a737-55cc-447b-a14a-e5f46b1b392d-kube-api-access-pkl7z\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.502729 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.502753 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8gf4\" (UniqueName: \"kubernetes.io/projected/ec51e290-a33a-47de-a766-85507801ff1b-kube-api-access-c8gf4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vfqwz\" (UID: \"ec51e290-a33a-47de-a766-85507801ff1b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz"
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.502804 4752 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.502891 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:13.002872678 +0000 UTC m=+892.232815586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "metrics-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.504264 4752 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.505889 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert podName:2d5d5722-0553-4d40-b618-a1c6d2e9f727 nodeName:}" failed. No retries permitted until 2026-01-22 10:40:13.505862739 +0000 UTC m=+892.735805647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert") pod "infra-operator-controller-manager-6b5d9f997d-clxq7" (UID: "2d5d5722-0553-4d40-b618-a1c6d2e9f727") : secret "infra-operator-webhook-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.504776 4752 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.505930 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:13.005924231 +0000 UTC m=+892.235867139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "webhook-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.526684 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkl7z\" (UniqueName: \"kubernetes.io/projected/7fd4a737-55cc-447b-a14a-e5f46b1b392d-kube-api-access-pkl7z\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.537377 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8gf4\" (UniqueName: \"kubernetes.io/projected/ec51e290-a33a-47de-a766-85507801ff1b-kube-api-access-c8gf4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vfqwz\" (UID: \"ec51e290-a33a-47de-a766-85507801ff1b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.585467 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.628444 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.707149 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw"
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.707336 4752 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: E0122 10:40:12.707382 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert podName:ffb1d469-6623-4086-a40b-66f153b47bcf nodeName:}" failed. No retries permitted until 2026-01-22 10:40:13.707368271 +0000 UTC m=+892.937311179 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" (UID: "ffb1d469-6623-4086-a40b-66f153b47bcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.762596 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz"
Jan 22 10:40:12 crc kubenswrapper[4752]: I0122 10:40:12.854199 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.011355 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.011470 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.011560 4752 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.011623 4752 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.011673 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:14.011656578 +0000 UTC m=+893.241599486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "webhook-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.011711 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:14.011680889 +0000 UTC m=+893.241623827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "metrics-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.290713 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.301720 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.320054 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt"]
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.323237 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf9949b6_e7f5_409e_8d78_012a9298233b.slice/crio-fd781f0ca875e223a407b458c65342584c07c8fbe482bf259f88a1a540a29633 WatchSource:0}: Error finding container fd781f0ca875e223a407b458c65342584c07c8fbe482bf259f88a1a540a29633: Status 404 returned error can't find the container with id fd781f0ca875e223a407b458c65342584c07c8fbe482bf259f88a1a540a29633
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.327997 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd60c858d_c3bc_4088_96c8_2fb5d865826a.slice/crio-bdccc37ad327b4e50d0bfe36d4845486438a8b6a3572c4900da7368cc06797f2 WatchSource:0}: Error finding container bdccc37ad327b4e50d0bfe36d4845486438a8b6a3572c4900da7368cc06797f2: Status 404 returned error can't find the container with id bdccc37ad327b4e50d0bfe36d4845486438a8b6a3572c4900da7368cc06797f2
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.330538 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.337088 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v"]
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.339065 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb8c589_c1ae_4438_b32d_2dcdfec470f6.slice/crio-7f8f8e01357c72e36ed1940804c46b38d7e7ec7f1e8bb6916ffa5a0fbed53449 WatchSource:0}: Error finding container 7f8f8e01357c72e36ed1940804c46b38d7e7ec7f1e8bb6916ffa5a0fbed53449: Status 404 returned error can't find the container with id 7f8f8e01357c72e36ed1940804c46b38d7e7ec7f1e8bb6916ffa5a0fbed53449
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.343178 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4e63413_7a90_4e3e_8380_000b611a7c9a.slice/crio-b3494bcc34920df09aafe365e4f9040188925b164a12f657c64b9dc8ed9c0472 WatchSource:0}: Error finding container b3494bcc34920df09aafe365e4f9040188925b164a12f657c64b9dc8ed9c0472: Status 404 returned error can't find the container with id b3494bcc34920df09aafe365e4f9040188925b164a12f657c64b9dc8ed9c0472
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.344051 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq"]
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.347106 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ce9837_825a_4ee8_a0b7_6d65a6c1766c.slice/crio-29e3b66eb4ba2ce1304723f2046ffcaf6d3ccd0fc34755b4858340ba964b89c0 WatchSource:0}: Error finding container 29e3b66eb4ba2ce1304723f2046ffcaf6d3ccd0fc34755b4858340ba964b89c0: Status 404 returned error can't find the container with id 29e3b66eb4ba2ce1304723f2046ffcaf6d3ccd0fc34755b4858340ba964b89c0
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.350656 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.357828 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.362080 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.381955 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.389127 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9gmm2"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.390001 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gmm2"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.395875 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.417956 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.426371 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4"]
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.434986 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82cf7c76_cd84_425a_acc7_9c3d9842eb96.slice/crio-f34081dd458caf27745d1d9f50ceaacfdeb151d1cd3ff6417ad57ca23417f321 WatchSource:0}: Error finding container f34081dd458caf27745d1d9f50ceaacfdeb151d1cd3ff6417ad57ca23417f321: Status 404 returned error can't find the container with id f34081dd458caf27745d1d9f50ceaacfdeb151d1cd3ff6417ad57ca23417f321
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.435247 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9x77z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-qmpws_openstack-operators(a1ada694-ef02-43d6-bfa7-98c3437af5bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.436486 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" podUID="a1ada694-ef02-43d6-bfa7-98c3437af5bc"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.440657 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbb7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-pqzp4_openstack-operators(82cf7c76-cd84-425a-acc7-9c3d9842eb96): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.441778 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" podUID="82cf7c76-cd84-425a-acc7-9c3d9842eb96"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.445245 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h"]
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.451582 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vmpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-xvznb_openstack-operators(70115a40-1b24-4548-bce1-41babb6186c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.452159 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jdts9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-p5srq_openstack-operators(4036fb44-c28e-4cb4-9284-209765d5a53d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.452672 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" podUID="70115a40-1b24-4548-bce1-41babb6186c4"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.453004 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gmm2"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.453257 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" podUID="4036fb44-c28e-4cb4-9284-209765d5a53d"
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.455905 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ed39f1_2d59_480f_a71b_20dbdb1a346a.slice/crio-e252ff9459d3fe398fa6d3f2e0a093becb3e5a2c6d4d237c804bc13b0149e53b WatchSource:0}: Error finding container e252ff9459d3fe398fa6d3f2e0a093becb3e5a2c6d4d237c804bc13b0149e53b: Status 404 returned error can't find the container with id e252ff9459d3fe398fa6d3f2e0a093becb3e5a2c6d4d237c804bc13b0149e53b
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.456007 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsnm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-5ks2h_openstack-operators(2fbc4a3c-0772-406d-8731-b49f80fa109c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.457059 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" podUID="2fbc4a3c-0772-406d-8731-b49f80fa109c"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.457654 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8d9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-7vg6x_openstack-operators(62ed39f1-2d59-480f-a71b-20dbdb1a346a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.459020 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" podUID="62ed39f1-2d59-480f-a71b-20dbdb1a346a"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.470251 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.479650 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.524424 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.524566 4752 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.524612 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert podName:2d5d5722-0553-4d40-b618-a1c6d2e9f727 nodeName:}" failed. No retries permitted until 2026-01-22 10:40:15.52459746 +0000 UTC m=+894.754540368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert") pod "infra-operator-controller-manager-6b5d9f997d-clxq7" (UID: "2d5d5722-0553-4d40-b618-a1c6d2e9f727") : secret "infra-operator-webhook-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.643313 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.649488 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.656023 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" event={"ID":"62ed39f1-2d59-480f-a71b-20dbdb1a346a","Type":"ContainerStarted","Data":"e252ff9459d3fe398fa6d3f2e0a093becb3e5a2c6d4d237c804bc13b0149e53b"}
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.662939 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" podUID="62ed39f1-2d59-480f-a71b-20dbdb1a346a"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.687546 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x"]
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.687915 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" event={"ID":"730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6","Type":"ContainerStarted","Data":"64bfa50c209765971be7480a3151221193d18a5a79b283998d6341a83fccafcd"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.696145 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" event={"ID":"c11a96f0-c68c-4249-b1b3-f7e427444776","Type":"ContainerStarted","Data":"8c2df6ed73cfb608ed3392917dbaac32304b7999d612d0dda8afd11e8f87412a"}
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.700933 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec51e290_a33a_47de_a766_85507801ff1b.slice/crio-b9258d1c399f14da586f43d5f2a0bb89aa14fb8068a768b83e565d70b249f716 WatchSource:0}: Error finding container b9258d1c399f14da586f43d5f2a0bb89aa14fb8068a768b83e565d70b249f716: Status 404 returned error can't find the container with id b9258d1c399f14da586f43d5f2a0bb89aa14fb8068a768b83e565d70b249f716
Jan 22 10:40:13 crc kubenswrapper[4752]: W0122 10:40:13.702724 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0267408_c09a_435a_94f6_5613498da9ca.slice/crio-f8263e752f715fd11181044af9079e80e59522af141f61aac283251d8e81dac1 WatchSource:0}: Error finding container f8263e752f715fd11181044af9079e80e59522af141f61aac283251d8e81dac1: Status 404 returned error can't find the container with id f8263e752f715fd11181044af9079e80e59522af141f61aac283251d8e81dac1
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.702983 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" event={"ID":"edb8c589-c1ae-4438-b32d-2dcdfec470f6","Type":"ContainerStarted","Data":"7f8f8e01357c72e36ed1940804c46b38d7e7ec7f1e8bb6916ffa5a0fbed53449"}
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.704082 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8gf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vfqwz_openstack-operators(ec51e290-a33a-47de-a766-85507801ff1b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.705271 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" event={"ID":"82cf7c76-cd84-425a-acc7-9c3d9842eb96","Type":"ContainerStarted","Data":"f34081dd458caf27745d1d9f50ceaacfdeb151d1cd3ff6417ad57ca23417f321"}
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.705665 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" podUID="ec51e290-a33a-47de-a766-85507801ff1b"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.706065 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6mf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-rpf8x_openstack-operators(a0267408-c09a-435a-94f6-5613498da9ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.707231 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" podUID="82cf7c76-cd84-425a-acc7-9c3d9842eb96"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.707289 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" podUID="a0267408-c09a-435a-94f6-5613498da9ca"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.707953 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" event={"ID":"a1ada694-ef02-43d6-bfa7-98c3437af5bc","Type":"ContainerStarted","Data":"ff28b592a60de5480175e81b24b9e2a8948086aeb51cb9a301e0d62e983459b0"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.709049 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" event={"ID":"4036fb44-c28e-4cb4-9284-209765d5a53d","Type":"ContainerStarted","Data":"3f5854e84d97e5f30cdbe3f3cb43a72176fd185f0b8b95cec8769347b7294737"}
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.709635 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" podUID="a1ada694-ef02-43d6-bfa7-98c3437af5bc"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.709739 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" podUID="4036fb44-c28e-4cb4-9284-209765d5a53d"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.710381 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" event={"ID":"12ce9837-825a-4ee8-a0b7-6d65a6c1766c","Type":"ContainerStarted","Data":"29e3b66eb4ba2ce1304723f2046ffcaf6d3ccd0fc34755b4858340ba964b89c0"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.712612 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" event={"ID":"79910a26-5ba2-4244-8880-31bd1322aec0","Type":"ContainerStarted","Data":"437867d88e98313b5d75464c0cbcc2c2292b5818da3e0b5f8f3748ddcf138cc7"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.714289 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" event={"ID":"fbffdae6-edb4-4812-904e-c1ef2783b477","Type":"ContainerStarted","Data":"fb3114b91e6e007af5ac1a1771634fcbcc4f44f59fc5c0295b94f0ce5792ed22"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.715517 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" event={"ID":"e2b564d9-5b5d-42f9-84c1-0ff33b54ff22","Type":"ContainerStarted","Data":"d9a077ee100e28c49b8a982fc1334f6cfb78f68914fec3cf5ea17d2942e70c80"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.716667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" event={"ID":"af9949b6-e7f5-409e-8d78-012a9298233b","Type":"ContainerStarted","Data":"fd781f0ca875e223a407b458c65342584c07c8fbe482bf259f88a1a540a29633"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.717832 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" event={"ID":"a4e63413-7a90-4e3e-8380-000b611a7c9a","Type":"ContainerStarted","Data":"b3494bcc34920df09aafe365e4f9040188925b164a12f657c64b9dc8ed9c0472"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.721112 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" event={"ID":"d60c858d-c3bc-4088-96c8-2fb5d865826a","Type":"ContainerStarted","Data":"bdccc37ad327b4e50d0bfe36d4845486438a8b6a3572c4900da7368cc06797f2"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.722775 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" event={"ID":"2fbc4a3c-0772-406d-8731-b49f80fa109c","Type":"ContainerStarted","Data":"05f8fda5aa17cf26f2faf729eb06f08ad7f92e19bd5c77cf78855d2d7baea23a"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.724034 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" event={"ID":"70115a40-1b24-4548-bce1-41babb6186c4","Type":"ContainerStarted","Data":"765b2e1915fd5549a7241ed816c95161a9b7ed84337fba94676451dca9c49c1a"}
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.725893 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" podUID="70115a40-1b24-4548-bce1-41babb6186c4"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.726309 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" podUID="2fbc4a3c-0772-406d-8731-b49f80fa109c"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.726927 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" event={"ID":"488cb72d-e028-497a-983c-a0a47113e285","Type":"ContainerStarted","Data":"4ab752c373bcecc7c54522ef8f9ab5cbec6c2e58a04b8e7ea5da0e57c5f1813f"}
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.727530 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw"
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.727774 4752 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: E0122 10:40:13.727816 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert podName:ffb1d469-6623-4086-a40b-66f153b47bcf nodeName:}" failed. No retries permitted until 2026-01-22 10:40:15.727803488 +0000 UTC m=+894.957746396 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" (UID: "ffb1d469-6623-4086-a40b-66f153b47bcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.773956 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gmm2"
Jan 22 10:40:13 crc kubenswrapper[4752]: I0122 10:40:13.839466 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gmm2"]
Jan 22 10:40:14 crc kubenswrapper[4752]: I0122 10:40:14.032463 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:14 crc kubenswrapper[4752]: I0122 10:40:14.032635 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.032847 4752 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.032947 4752 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.032962 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:16.032940388 +0000 UTC m=+895.262883306 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "metrics-server-cert" not found
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.033054 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:16.033032511 +0000 UTC m=+895.262975419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "webhook-server-cert" not found
Jan 22 10:40:14 crc kubenswrapper[4752]: I0122 10:40:14.741893 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" event={"ID":"36859a98-81f1-4ad5-aceb-b0013bf8aa42","Type":"ContainerStarted","Data":"eca7ebe7fc5dc56fcba2e6e01ef92a06057eb75168cd3f8456be014fbeae0999"}
Jan 22 10:40:14 crc kubenswrapper[4752]: I0122 10:40:14.745041 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" event={"ID":"a0267408-c09a-435a-94f6-5613498da9ca","Type":"ContainerStarted","Data":"f8263e752f715fd11181044af9079e80e59522af141f61aac283251d8e81dac1"}
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.746533 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" podUID="a0267408-c09a-435a-94f6-5613498da9ca"
Jan 22 10:40:14 crc kubenswrapper[4752]: I0122 10:40:14.748230 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" event={"ID":"ec51e290-a33a-47de-a766-85507801ff1b","Type":"ContainerStarted","Data":"b9258d1c399f14da586f43d5f2a0bb89aa14fb8068a768b83e565d70b249f716"}
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.750626 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" podUID="ec51e290-a33a-47de-a766-85507801ff1b"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.750766 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" podUID="82cf7c76-cd84-425a-acc7-9c3d9842eb96"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.751088 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" podUID="2fbc4a3c-0772-406d-8731-b49f80fa109c"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.751165 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" podUID="4036fb44-c28e-4cb4-9284-209765d5a53d"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.751477 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" podUID="62ed39f1-2d59-480f-a71b-20dbdb1a346a"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.751793 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" podUID="70115a40-1b24-4548-bce1-41babb6186c4"
Jan 22 10:40:14 crc kubenswrapper[4752]: E0122 10:40:14.752542 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" podUID="a1ada694-ef02-43d6-bfa7-98c3437af5bc"
Jan 22 10:40:15 crc kubenswrapper[4752]: I0122 10:40:15.580991 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7"
Jan 22 10:40:15 crc kubenswrapper[4752]: E0122 10:40:15.581198 4752 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 10:40:15 crc kubenswrapper[4752]: E0122 10:40:15.581287 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert podName:2d5d5722-0553-4d40-b618-a1c6d2e9f727 nodeName:}" failed. No retries permitted until 2026-01-22 10:40:19.58126605 +0000 UTC m=+898.811208958 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert") pod "infra-operator-controller-manager-6b5d9f997d-clxq7" (UID: "2d5d5722-0553-4d40-b618-a1c6d2e9f727") : secret "infra-operator-webhook-server-cert" not found
Jan 22 10:40:15 crc kubenswrapper[4752]: I0122 10:40:15.762965 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gmm2" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="registry-server" containerID="cri-o://f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" gracePeriod=2
Jan 22 10:40:15 crc kubenswrapper[4752]: E0122 10:40:15.764035 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" podUID="ec51e290-a33a-47de-a766-85507801ff1b"
Jan 22 10:40:15 crc kubenswrapper[4752]: E0122 10:40:15.768154 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" podUID="a0267408-c09a-435a-94f6-5613498da9ca"
Jan 22 10:40:15 crc kubenswrapper[4752]: I0122 10:40:15.784954 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw"
Jan 22 10:40:15 crc kubenswrapper[4752]: E0122 10:40:15.785119 4752 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 10:40:15 crc kubenswrapper[4752]: E0122 10:40:15.785170 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert podName:ffb1d469-6623-4086-a40b-66f153b47bcf nodeName:}" failed. No retries permitted until 2026-01-22 10:40:19.785155986 +0000 UTC m=+899.015098894 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" (UID: "ffb1d469-6623-4086-a40b-66f153b47bcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:40:16 crc kubenswrapper[4752]: I0122 10:40:16.089445 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:16 crc kubenswrapper[4752]: I0122 10:40:16.089833 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:16 crc kubenswrapper[4752]: E0122 10:40:16.089587 4752 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:40:16 crc kubenswrapper[4752]: E0122 10:40:16.090256 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:20.090235915 +0000 UTC m=+899.320178823 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "webhook-server-cert" not found Jan 22 10:40:16 crc kubenswrapper[4752]: E0122 10:40:16.090166 4752 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:40:16 crc kubenswrapper[4752]: E0122 10:40:16.090448 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:20.090435661 +0000 UTC m=+899.320378569 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "metrics-server-cert" not found Jan 22 10:40:16 crc kubenswrapper[4752]: I0122 10:40:16.780963 4752 generic.go:334] "Generic (PLEG): container finished" podID="48e9665e-d388-4637-b628-34c7a8fc4357" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" exitCode=0 Jan 22 10:40:16 crc kubenswrapper[4752]: I0122 10:40:16.781010 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerDied","Data":"f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3"} Jan 22 10:40:19 crc kubenswrapper[4752]: I0122 10:40:19.645368 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:19 crc kubenswrapper[4752]: E0122 10:40:19.645530 4752 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 10:40:19 crc kubenswrapper[4752]: E0122 10:40:19.645718 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert podName:2d5d5722-0553-4d40-b618-a1c6d2e9f727 nodeName:}" failed. No retries permitted until 2026-01-22 10:40:27.645702271 +0000 UTC m=+906.875645179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert") pod "infra-operator-controller-manager-6b5d9f997d-clxq7" (UID: "2d5d5722-0553-4d40-b618-a1c6d2e9f727") : secret "infra-operator-webhook-server-cert" not found Jan 22 10:40:19 crc kubenswrapper[4752]: I0122 10:40:19.849319 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:19 crc kubenswrapper[4752]: E0122 10:40:19.849520 4752 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:40:19 crc kubenswrapper[4752]: E0122 10:40:19.849883 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert podName:ffb1d469-6623-4086-a40b-66f153b47bcf nodeName:}" failed. No retries permitted until 2026-01-22 10:40:27.849837703 +0000 UTC m=+907.079780611 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" (UID: "ffb1d469-6623-4086-a40b-66f153b47bcf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 10:40:20 crc kubenswrapper[4752]: I0122 10:40:20.154577 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:20 crc kubenswrapper[4752]: I0122 10:40:20.154684 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:20 crc kubenswrapper[4752]: E0122 10:40:20.154919 4752 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:40:20 crc kubenswrapper[4752]: E0122 10:40:20.155034 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:28.155005944 +0000 UTC m=+907.384948892 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "webhook-server-cert" not found Jan 22 10:40:20 crc kubenswrapper[4752]: E0122 10:40:20.155034 4752 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 10:40:20 crc kubenswrapper[4752]: E0122 10:40:20.155144 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:28.155114687 +0000 UTC m=+907.385057625 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "metrics-server-cert" not found Jan 22 10:40:23 crc kubenswrapper[4752]: E0122 10:40:23.389000 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:40:23 crc kubenswrapper[4752]: E0122 10:40:23.390848 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:40:23 crc kubenswrapper[4752]: E0122 10:40:23.392745 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:40:23 crc kubenswrapper[4752]: E0122 10:40:23.392806 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9gmm2" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="registry-server" Jan 22 10:40:27 crc kubenswrapper[4752]: I0122 10:40:27.731289 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:27 crc kubenswrapper[4752]: I0122 10:40:27.739376 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d5d5722-0553-4d40-b618-a1c6d2e9f727-cert\") pod \"infra-operator-controller-manager-6b5d9f997d-clxq7\" (UID: \"2d5d5722-0553-4d40-b618-a1c6d2e9f727\") " pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:27 crc kubenswrapper[4752]: I0122 10:40:27.904753 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x4nnn" Jan 22 10:40:27 crc kubenswrapper[4752]: I0122 10:40:27.913296 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:40:27 crc kubenswrapper[4752]: I0122 10:40:27.933941 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:27 crc kubenswrapper[4752]: I0122 10:40:27.941554 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ffb1d469-6623-4086-a40b-66f153b47bcf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bthdw\" (UID: \"ffb1d469-6623-4086-a40b-66f153b47bcf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:28 crc kubenswrapper[4752]: E0122 10:40:28.026096 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 22 10:40:28 crc kubenswrapper[4752]: E0122 10:40:28.026306 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpgs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-j2fbt_openstack-operators(d60c858d-c3bc-4088-96c8-2fb5d865826a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:40:28 crc kubenswrapper[4752]: E0122 10:40:28.027550 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" podUID="d60c858d-c3bc-4088-96c8-2fb5d865826a" Jan 22 10:40:28 crc kubenswrapper[4752]: I0122 10:40:28.221358 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v7fw8" Jan 22 10:40:28 crc kubenswrapper[4752]: I0122 10:40:28.230299 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:40:28 crc kubenswrapper[4752]: I0122 10:40:28.240497 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:28 crc kubenswrapper[4752]: I0122 10:40:28.240649 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:28 crc kubenswrapper[4752]: E0122 10:40:28.240765 4752 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 10:40:28 crc kubenswrapper[4752]: E0122 10:40:28.240810 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs podName:7fd4a737-55cc-447b-a14a-e5f46b1b392d nodeName:}" failed. No retries permitted until 2026-01-22 10:40:44.240795111 +0000 UTC m=+923.470738019 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs") pod "openstack-operator-controller-manager-fd964b9dd-kvpr7" (UID: "7fd4a737-55cc-447b-a14a-e5f46b1b392d") : secret "webhook-server-cert" not found Jan 22 10:40:28 crc kubenswrapper[4752]: I0122 10:40:28.245018 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-metrics-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:28 crc kubenswrapper[4752]: E0122 10:40:28.874786 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" podUID="d60c858d-c3bc-4088-96c8-2fb5d865826a" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.226534 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.227849 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prhsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-jqfns_openstack-operators(af9949b6-e7f5-409e-8d78-012a9298233b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.229527 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" podUID="af9949b6-e7f5-409e-8d78-012a9298233b" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.902411 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" podUID="af9949b6-e7f5-409e-8d78-012a9298233b" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.907123 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.907298 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dp9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-62w75_openstack-operators(edb8c589-c1ae-4438-b32d-2dcdfec470f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:40:31 crc kubenswrapper[4752]: E0122 10:40:31.909078 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" podUID="edb8c589-c1ae-4438-b32d-2dcdfec470f6" Jan 22 10:40:32 crc kubenswrapper[4752]: E0122 10:40:32.907638 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" podUID="edb8c589-c1ae-4438-b32d-2dcdfec470f6" Jan 22 10:40:33 crc kubenswrapper[4752]: E0122 10:40:33.389596 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:40:33 crc kubenswrapper[4752]: E0122 10:40:33.390185 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 10:40:33 crc kubenswrapper[4752]: E0122 10:40:33.393814 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 
10:40:33 crc kubenswrapper[4752]: E0122 10:40:33.393880 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9gmm2" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="registry-server" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.207471 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.208031 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tv2hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-6pt5z_openstack-operators(79910a26-5ba2-4244-8880-31bd1322aec0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.209943 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" podUID="79910a26-5ba2-4244-8880-31bd1322aec0" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.913115 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.913381 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fwm6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-2vmkm_openstack-operators(12ce9837-825a-4ee8-a0b7-6d65a6c1766c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.914697 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" podUID="12ce9837-825a-4ee8-a0b7-6d65a6c1766c" Jan 22 10:40:34 crc kubenswrapper[4752]: I0122 
10:40:34.921178 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gmm2" event={"ID":"48e9665e-d388-4637-b628-34c7a8fc4357","Type":"ContainerDied","Data":"40b5fcfd65b7dec5b4abaa6ea9f459eb1a65c4e6b41c707889bf6cad0504fbcf"} Jan 22 10:40:34 crc kubenswrapper[4752]: I0122 10:40:34.921280 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b5fcfd65b7dec5b4abaa6ea9f459eb1a65c4e6b41c707889bf6cad0504fbcf" Jan 22 10:40:34 crc kubenswrapper[4752]: E0122 10:40:34.929120 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" podUID="79910a26-5ba2-4244-8880-31bd1322aec0" Jan 22 10:40:34 crc kubenswrapper[4752]: I0122 10:40:34.976602 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.080560 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-utilities\") pod \"48e9665e-d388-4637-b628-34c7a8fc4357\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.080628 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-catalog-content\") pod \"48e9665e-d388-4637-b628-34c7a8fc4357\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.080740 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwzkr\" (UniqueName: \"kubernetes.io/projected/48e9665e-d388-4637-b628-34c7a8fc4357-kube-api-access-xwzkr\") pod \"48e9665e-d388-4637-b628-34c7a8fc4357\" (UID: \"48e9665e-d388-4637-b628-34c7a8fc4357\") " Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.082151 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-utilities" (OuterVolumeSpecName: "utilities") pod "48e9665e-d388-4637-b628-34c7a8fc4357" (UID: "48e9665e-d388-4637-b628-34c7a8fc4357"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.088418 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e9665e-d388-4637-b628-34c7a8fc4357-kube-api-access-xwzkr" (OuterVolumeSpecName: "kube-api-access-xwzkr") pod "48e9665e-d388-4637-b628-34c7a8fc4357" (UID: "48e9665e-d388-4637-b628-34c7a8fc4357"). InnerVolumeSpecName "kube-api-access-xwzkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.130930 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48e9665e-d388-4637-b628-34c7a8fc4357" (UID: "48e9665e-d388-4637-b628-34c7a8fc4357"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.182409 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.182438 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e9665e-d388-4637-b628-34c7a8fc4357-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.182450 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwzkr\" (UniqueName: \"kubernetes.io/projected/48e9665e-d388-4637-b628-34c7a8fc4357-kube-api-access-xwzkr\") on node \"crc\" DevicePath \"\"" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.927288 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gmm2" Jan 22 10:40:35 crc kubenswrapper[4752]: E0122 10:40:35.935648 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" podUID="12ce9837-825a-4ee8-a0b7-6d65a6c1766c" Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.982692 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gmm2"] Jan 22 10:40:35 crc kubenswrapper[4752]: I0122 10:40:35.991091 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9gmm2"] Jan 22 10:40:37 crc kubenswrapper[4752]: I0122 10:40:37.109692 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" path="/var/lib/kubelet/pods/48e9665e-d388-4637-b628-34c7a8fc4357/volumes" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.033598 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7"] Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.278524 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcsm9"] Jan 22 10:40:43 crc kubenswrapper[4752]: E0122 10:40:43.279588 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="extract-utilities" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.279610 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="extract-utilities" Jan 22 10:40:43 crc kubenswrapper[4752]: E0122 10:40:43.279637 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="extract-content" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.279644 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="extract-content" Jan 22 10:40:43 crc kubenswrapper[4752]: E0122 10:40:43.279659 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="registry-server" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 
10:40:43.279666 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="registry-server" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.279873 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e9665e-d388-4637-b628-34c7a8fc4357" containerName="registry-server" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.280837 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.289888 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcsm9"] Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.336396 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-utilities\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.336490 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-catalog-content\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.336519 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czs6d\" (UniqueName: \"kubernetes.io/projected/0146b151-dc32-4af8-810f-f27edca3959e-kube-api-access-czs6d\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.437359 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-utilities\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.437437 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-catalog-content\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.437459 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czs6d\" (UniqueName: \"kubernetes.io/projected/0146b151-dc32-4af8-810f-f27edca3959e-kube-api-access-czs6d\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.438169 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-utilities\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc 
kubenswrapper[4752]: I0122 10:40:43.438416 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-catalog-content\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.462038 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czs6d\" (UniqueName: \"kubernetes.io/projected/0146b151-dc32-4af8-810f-f27edca3959e-kube-api-access-czs6d\") pod \"certified-operators-xcsm9\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:43 crc kubenswrapper[4752]: I0122 10:40:43.611399 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.024269 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" event={"ID":"488cb72d-e028-497a-983c-a0a47113e285","Type":"ContainerStarted","Data":"bc5eb5b5bbeb1f2d24ed7adeff5da4f308149b12ba930f11935526bc7417061a"} Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.024757 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.031659 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" event={"ID":"2d5d5722-0553-4d40-b618-a1c6d2e9f727","Type":"ContainerStarted","Data":"f7c4dbc3d79d7170af7059ca029216d22daa60233f548b8b58073a1068054fde"} Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.046808 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" podStartSLOduration=9.481141711 podStartE2EDuration="33.046790204s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:12.864941226 +0000 UTC m=+892.094884134" lastFinishedPulling="2026-01-22 10:40:36.430589719 +0000 UTC m=+915.660532627" observedRunningTime="2026-01-22 10:40:44.045903201 +0000 UTC m=+923.275846119" watchObservedRunningTime="2026-01-22 10:40:44.046790204 +0000 UTC m=+923.276733112" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.100542 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw"] Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.269751 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.307843 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fd4a737-55cc-447b-a14a-e5f46b1b392d-webhook-certs\") pod \"openstack-operator-controller-manager-fd964b9dd-kvpr7\" (UID: \"7fd4a737-55cc-447b-a14a-e5f46b1b392d\") " 
pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.540529 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pxsv5" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.548794 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:44 crc kubenswrapper[4752]: I0122 10:40:44.646397 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcsm9"] Jan 22 10:40:44 crc kubenswrapper[4752]: W0122 10:40:44.766678 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0146b151_dc32_4af8_810f_f27edca3959e.slice/crio-3ff22818b938e4b0378de03132b87337274c372460c73187318fac9e0b26d4d8 WatchSource:0}: Error finding container 3ff22818b938e4b0378de03132b87337274c372460c73187318fac9e0b26d4d8: Status 404 returned error can't find the container with id 3ff22818b938e4b0378de03132b87337274c372460c73187318fac9e0b26d4d8 Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.045033 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" event={"ID":"2fbc4a3c-0772-406d-8731-b49f80fa109c","Type":"ContainerStarted","Data":"26f2b5c76e4ab9a9a6b1197c43a05888928d4b93949805b451fa27a19e9bba44"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.046383 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.049807 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" event={"ID":"82cf7c76-cd84-425a-acc7-9c3d9842eb96","Type":"ContainerStarted","Data":"3b591f2f584990f5aa20d221e5d713514d097639731132bf11449114c90a313a"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.050402 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.076838 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerStarted","Data":"3ff22818b938e4b0378de03132b87337274c372460c73187318fac9e0b26d4d8"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.093512 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" event={"ID":"a4e63413-7a90-4e3e-8380-000b611a7c9a","Type":"ContainerStarted","Data":"bd27bc37d81ee4f9a1abaecaf3e5f86f3d2c2cd681311a07388c1ca6d153674a"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.094254 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.121701 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" podStartSLOduration=4.0028993 podStartE2EDuration="34.121686973s" podCreationTimestamp="2026-01-22 10:40:11 +0000 
Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.137041 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" podStartSLOduration=3.975368566 podStartE2EDuration="34.137016976s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.440454188 +0000 UTC m=+892.670397096" lastFinishedPulling="2026-01-22 10:40:43.602102598 +0000 UTC m=+922.832045506" observedRunningTime="2026-01-22 10:40:45.120357438 +0000 UTC m=+924.350300346" watchObservedRunningTime="2026-01-22 10:40:45.137016976 +0000 UTC m=+924.366959884" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.271934 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.271968 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.271983 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" event={"ID":"730b2e5f-dab1-4ce8-85ef-3d8cf62c38b6","Type":"ContainerStarted","Data":"d86667426568997f584a8096b15d3afb2e08af301055a6244b15d56a965b57d9"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.271998 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" event={"ID":"ec51e290-a33a-47de-a766-85507801ff1b","Type":"ContainerStarted","Data":"4338938e9f9c9dbc7b1d5fd022890d927a5c4186fc7d16b2467826a1c3a94a20"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.272009 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" event={"ID":"62ed39f1-2d59-480f-a71b-20dbdb1a346a","Type":"ContainerStarted","Data":"e6ff18da69ff36d36380cbdef0d6d338ac585bee114ad41524628259b814f872"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.272022 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" event={"ID":"4036fb44-c28e-4cb4-9284-209765d5a53d","Type":"ContainerStarted","Data":"d3ea9da33af10a8edebf652246a3befe8844b1a604c8a332a4fcdc8126a90a80"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.272032 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" event={"ID":"36859a98-81f1-4ad5-aceb-b0013bf8aa42","Type":"ContainerStarted","Data":"67a3623da11ce4b30a02b80107a1377a28ed1e121cb129485ef8f8d901b8ce2f"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.272616 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.272728 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" 
event={"ID":"fbffdae6-edb4-4812-904e-c1ef2783b477","Type":"ContainerStarted","Data":"1b482e941c757dc5234affad472db2f184d4e6d192a7af3212c89741aadde649"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.273195 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.273368 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.276916 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" event={"ID":"e2b564d9-5b5d-42f9-84c1-0ff33b54ff22","Type":"ContainerStarted","Data":"22453eb7d7c0f027b6909076c15e9e602485b9b7b94dcd50e83edff027608426"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.277377 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.278239 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" event={"ID":"ffb1d469-6623-4086-a40b-66f153b47bcf","Type":"ContainerStarted","Data":"ddee6e5cbee2e94fe53a8891df53dba3546d9c770d4cefd2f290d7786c472263"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.279174 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" event={"ID":"c11a96f0-c68c-4249-b1b3-f7e427444776","Type":"ContainerStarted","Data":"8105a3724b0c3df6f0ea534229e22b99b5451d2dedbd79f1f260eb390c16a63b"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.279516 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.282669 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" event={"ID":"a1ada694-ef02-43d6-bfa7-98c3437af5bc","Type":"ContainerStarted","Data":"db2296eef06a8bae781c6a9980fc99e671afd8f7b3849aaa2989b146b9b28ae0"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.283158 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.284823 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" event={"ID":"a0267408-c09a-435a-94f6-5613498da9ca","Type":"ContainerStarted","Data":"b0181a1e8bd50880cd07961bda0a0d4dde52c89f173b94357687baa98acd4da6"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.285203 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.286651 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" event={"ID":"70115a40-1b24-4548-bce1-41babb6186c4","Type":"ContainerStarted","Data":"61ac9fcfc773ae81a8af743db838bbd11e4c904b36d4bec6379eabf992868c13"} Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 
10:40:45.287039 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.369693 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" podStartSLOduration=11.285329204 podStartE2EDuration="34.36966979s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.34573406 +0000 UTC m=+892.575676958" lastFinishedPulling="2026-01-22 10:40:36.430074636 +0000 UTC m=+915.660017544" observedRunningTime="2026-01-22 10:40:45.189399722 +0000 UTC m=+924.419342640" watchObservedRunningTime="2026-01-22 10:40:45.36966979 +0000 UTC m=+924.599612708" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.406576 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" podStartSLOduration=11.304459854 podStartE2EDuration="34.406544799s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.328527005 +0000 UTC m=+892.558469913" lastFinishedPulling="2026-01-22 10:40:36.43061195 +0000 UTC m=+915.660554858" observedRunningTime="2026-01-22 10:40:45.370395609 +0000 UTC m=+924.600338527" watchObservedRunningTime="2026-01-22 10:40:45.406544799 +0000 UTC m=+924.636487707" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.416545 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" podStartSLOduration=11.313916163 podStartE2EDuration="34.416494791s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.328089313 +0000 UTC m=+892.558032221" lastFinishedPulling="2026-01-22 10:40:36.430667921 +0000 UTC m=+915.660610849" observedRunningTime="2026-01-22 10:40:45.406141949 +0000 UTC m=+924.636084847" watchObservedRunningTime="2026-01-22 10:40:45.416494791 +0000 UTC m=+924.646437699" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.567398 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" podStartSLOduration=4.70527592 podStartE2EDuration="34.567381256s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.705959298 +0000 UTC m=+892.935902206" lastFinishedPulling="2026-01-22 10:40:43.568064634 +0000 UTC m=+922.798007542" observedRunningTime="2026-01-22 10:40:45.565048085 +0000 UTC m=+924.794990993" watchObservedRunningTime="2026-01-22 10:40:45.567381256 +0000 UTC m=+924.797324164" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.568051 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vfqwz" podStartSLOduration=3.299987969 podStartE2EDuration="33.568042684s" podCreationTimestamp="2026-01-22 10:40:12 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.703961904 +0000 UTC m=+892.933904812" lastFinishedPulling="2026-01-22 10:40:43.972016619 +0000 UTC m=+923.201959527" observedRunningTime="2026-01-22 10:40:45.45601637 +0000 UTC m=+924.685959278" watchObservedRunningTime="2026-01-22 10:40:45.568042684 +0000 UTC m=+924.797985592" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.611559 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7"] Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.625169 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" podStartSLOduration=11.528122204 podStartE2EDuration="34.625145404s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.330669683 +0000 UTC m=+892.560612581" lastFinishedPulling="2026-01-22 10:40:36.427692873 +0000 UTC m=+915.657635781" observedRunningTime="2026-01-22 10:40:45.603088735 +0000 UTC m=+924.833031643" watchObservedRunningTime="2026-01-22 10:40:45.625145404 +0000 UTC m=+924.855088312" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.651493 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" podStartSLOduration=4.52278477 podStartE2EDuration="34.651478506s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.452044481 +0000 UTC m=+892.681987389" lastFinishedPulling="2026-01-22 10:40:43.580738217 +0000 UTC m=+922.810681125" observedRunningTime="2026-01-22 10:40:45.647677407 +0000 UTC m=+924.877620315" watchObservedRunningTime="2026-01-22 10:40:45.651478506 +0000 UTC m=+924.881421414" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.689200 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" podStartSLOduration=4.559893885 podStartE2EDuration="34.689184447s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.451446055 +0000 UTC m=+892.681388963" lastFinishedPulling="2026-01-22 10:40:43.580736617 +0000 UTC m=+922.810679525" observedRunningTime="2026-01-22 10:40:45.685647274 +0000 UTC m=+924.915590192" watchObservedRunningTime="2026-01-22 10:40:45.689184447 +0000 UTC m=+924.919127355" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.721310 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" podStartSLOduration=11.721405665 podStartE2EDuration="34.721290291s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.427848028 +0000 UTC m=+892.657790936" lastFinishedPulling="2026-01-22 10:40:36.427732654 +0000 UTC m=+915.657675562" observedRunningTime="2026-01-22 10:40:45.720068319 +0000 UTC m=+924.950011227" watchObservedRunningTime="2026-01-22 10:40:45.721290291 +0000 UTC m=+924.951233199" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.779486 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" podStartSLOduration=10.487603478 podStartE2EDuration="33.77946173s" podCreationTimestamp="2026-01-22 10:40:12 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.671033285 +0000 UTC m=+892.900976193" lastFinishedPulling="2026-01-22 10:40:36.962891547 +0000 UTC m=+916.192834445" observedRunningTime="2026-01-22 10:40:45.748182528 +0000 UTC m=+924.978125436" watchObservedRunningTime="2026-01-22 10:40:45.77946173 +0000 UTC m=+925.009404638" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.801166 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" podStartSLOduration=4.639172897 podStartE2EDuration="34.8011489s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.435100033 +0000 UTC m=+892.665042941" lastFinishedPulling="2026-01-22 10:40:43.597076036 +0000 UTC m=+922.827018944" observedRunningTime="2026-01-22 10:40:45.80041548 +0000 UTC m=+925.030358408" watchObservedRunningTime="2026-01-22 10:40:45.8011489 +0000 UTC m=+925.031091808" Jan 22 10:40:45 crc kubenswrapper[4752]: I0122 10:40:45.830060 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" podStartSLOduration=4.718674072 podStartE2EDuration="34.830039959s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.45757739 +0000 UTC m=+892.687520298" lastFinishedPulling="2026-01-22 10:40:43.568943277 +0000 UTC m=+922.798886185" observedRunningTime="2026-01-22 10:40:45.828351675 +0000 UTC m=+925.058294583" watchObservedRunningTime="2026-01-22 10:40:45.830039959 +0000 UTC m=+925.059982867" Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.302733 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" event={"ID":"7fd4a737-55cc-447b-a14a-e5f46b1b392d","Type":"ContainerStarted","Data":"eddd1a1bcfb17cac0df617494c0afcc31de318de7029cf3f19434dfe18c40aec"} Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.302776 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" event={"ID":"7fd4a737-55cc-447b-a14a-e5f46b1b392d","Type":"ContainerStarted","Data":"75307dcd70edb3053adbbd9a4a39004e5d0913ca973ee49773d9abb5f884d369"} Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.303653 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.306302 4752 generic.go:334] "Generic (PLEG): container finished" podID="0146b151-dc32-4af8-810f-f27edca3959e" containerID="e501dd763f1d3448d68ba8eeb8a4a984e34193b81d8437ff0ab8cb2ace80acf8" exitCode=0 Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.306491 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerDied","Data":"e501dd763f1d3448d68ba8eeb8a4a984e34193b81d8437ff0ab8cb2ace80acf8"} Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.310065 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" event={"ID":"d60c858d-c3bc-4088-96c8-2fb5d865826a","Type":"ContainerStarted","Data":"4e9401e379d2e2ab91e2283e9b4565052522dc8a703dedc7a91c4d0e3365ea66"} Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.310373 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.348425 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" podStartSLOduration=34.348406952 podStartE2EDuration="34.348406952s" podCreationTimestamp="2026-01-22 10:40:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:40:46.3475736 +0000 UTC m=+925.577516508" watchObservedRunningTime="2026-01-22 10:40:46.348406952 +0000 UTC m=+925.578349860" Jan 22 10:40:46 crc kubenswrapper[4752]: I0122 10:40:46.382954 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" podStartSLOduration=3.842176599 podStartE2EDuration="35.382936339s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.334056775 +0000 UTC m=+892.563999683" lastFinishedPulling="2026-01-22 10:40:44.874816515 +0000 UTC m=+924.104759423" observedRunningTime="2026-01-22 10:40:46.381474471 +0000 UTC m=+925.611417379" watchObservedRunningTime="2026-01-22 10:40:46.382936339 +0000 UTC m=+925.612879247" Jan 22 10:40:47 crc kubenswrapper[4752]: I0122 10:40:47.344753 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" event={"ID":"af9949b6-e7f5-409e-8d78-012a9298233b","Type":"ContainerStarted","Data":"b7a16401b59e7295816ea8c2881957e47217bc4ef1858400cd4ca36ef782ac2c"} Jan 22 10:40:47 crc kubenswrapper[4752]: I0122 10:40:47.345941 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:47 crc kubenswrapper[4752]: I0122 10:40:47.357644 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerStarted","Data":"6025539024d83ffe0e1e1c06e11c50f37cb5e932be30aa98113152bc20f92547"} Jan 22 10:40:47 crc kubenswrapper[4752]: I0122 10:40:47.369266 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" podStartSLOduration=3.120152718 podStartE2EDuration="36.369249559s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.328787432 +0000 UTC m=+892.558730340" lastFinishedPulling="2026-01-22 10:40:46.577884273 +0000 UTC m=+925.807827181" observedRunningTime="2026-01-22 10:40:47.364245388 +0000 UTC m=+926.594188296" watchObservedRunningTime="2026-01-22 10:40:47.369249559 +0000 UTC m=+926.599192467" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.387473 4752 generic.go:334] "Generic (PLEG): container finished" podID="0146b151-dc32-4af8-810f-f27edca3959e" containerID="6025539024d83ffe0e1e1c06e11c50f37cb5e932be30aa98113152bc20f92547" exitCode=0 Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.389244 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerDied","Data":"6025539024d83ffe0e1e1c06e11c50f37cb5e932be30aa98113152bc20f92547"} Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.469179 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g99jj"] Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.484777 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g99jj"] Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.485017 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g99jj"
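The volume lines that follow show the kubelet's volume manager bringing the new redhat-marketplace-g99jj pod's volumes from desired state to actual state: each volume passes through VerifyControllerAttachedVolume, then MountVolume, then a MountVolume.SetUp success. A minimal sketch of that desired-state/actual-state reconcile pattern, with made-up types; kubelet's real implementation runs these steps asynchronously through an operation executor:

```go
package main

import "fmt"

type volume struct{ name, pod string }

// states holds a toy desired state (volumes pods need) and actual state
// (volumes already mounted), keyed by pod/volume.
type states struct {
	desired []volume
	mounted map[string]bool
}

// reconcile closes the gap between desired and actual state; a second
// call is a no-op because the actual state already matches.
func (s *states) reconcile() {
	for _, v := range s.desired {
		key := v.pod + "/" + v.name
		if !s.mounted[key] {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
			s.mounted[key] = true // stands in for VerifyControllerAttachedVolume + MountVolume.SetUp
			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
		}
	}
}

func main() {
	s := &states{
		desired: []volume{
			{"catalog-content", "redhat-marketplace-g99jj"},
			{"utilities", "redhat-marketplace-g99jj"},
			{"kube-api-access-sj4vb", "redhat-marketplace-g99jj"},
		},
		mounted: map[string]bool{},
	}
	s.reconcile() // mounts each volume exactly once
	s.reconcile() // idempotent: nothing left to do
}
```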
Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.649179 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4vb\" (UniqueName: \"kubernetes.io/projected/fbb6b684-df47-4377-81da-9af71f3b3865-kube-api-access-sj4vb\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.650418 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-catalog-content\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.650584 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-utilities\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.753259 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-catalog-content\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.753344 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-utilities\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.753410 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj4vb\" (UniqueName: \"kubernetes.io/projected/fbb6b684-df47-4377-81da-9af71f3b3865-kube-api-access-sj4vb\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.754024 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-catalog-content\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.754134 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-utilities\") pod \"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.785742 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj4vb\" (UniqueName: \"kubernetes.io/projected/fbb6b684-df47-4377-81da-9af71f3b3865-kube-api-access-sj4vb\") pod 
\"redhat-marketplace-g99jj\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:48 crc kubenswrapper[4752]: I0122 10:40:48.839182 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:40:51 crc kubenswrapper[4752]: I0122 10:40:51.869230 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nw62c" Jan 22 10:40:51 crc kubenswrapper[4752]: I0122 10:40:51.872236 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-5w4l6" Jan 22 10:40:51 crc kubenswrapper[4752]: I0122 10:40:51.897249 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-j2fbt" Jan 22 10:40:51 crc kubenswrapper[4752]: I0122 10:40:51.953587 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qtrfq" Jan 22 10:40:51 crc kubenswrapper[4752]: I0122 10:40:51.975469 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-kzvxf" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.045694 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jqfns" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.073226 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qmpws" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.176490 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-pqzp4" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.187649 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-knf8v" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.275080 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-5ks2h" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.352049 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xvznb" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.377481 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-xrrtq" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.377556 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-p5srq" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.413078 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-7vg6x" Jan 22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.590160 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-rpf8x" Jan 
22 10:40:52 crc kubenswrapper[4752]: I0122 10:40:52.633759 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8dc8cff97-5ff28" Jan 22 10:40:54 crc kubenswrapper[4752]: I0122 10:40:54.557666 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-fd964b9dd-kvpr7" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.557464 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" event={"ID":"2d5d5722-0553-4d40-b618-a1c6d2e9f727","Type":"ContainerStarted","Data":"7665346b225271533e5f40ccad352a7f8da051df9b18f1252d0cc5107eaa2291"} Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.558779 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.560243 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerStarted","Data":"bc2d56c1950fa9fd6f11cd67c00cdb3a3e668791f9465eb50f2ab197684b4eea"} Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.561688 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" event={"ID":"79910a26-5ba2-4244-8880-31bd1322aec0","Type":"ContainerStarted","Data":"3237119b909fce87d0f54b9825379156bcc2c9c55cc9b23651263a19edbd95dc"} Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.562025 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.563158 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" event={"ID":"edb8c589-c1ae-4438-b32d-2dcdfec470f6","Type":"ContainerStarted","Data":"41d92532d80b79eac6aec10e74142728f005a5e2d493e0ace2e86013eebdf213"} Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.563473 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.564587 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" event={"ID":"12ce9837-825a-4ee8-a0b7-6d65a6c1766c","Type":"ContainerStarted","Data":"2b79bed9084f18063e7ee4bcddd4fece2ab9ac822b92c92542e795e39819e0db"} Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.564898 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.566169 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" event={"ID":"ffb1d469-6623-4086-a40b-66f153b47bcf","Type":"ContainerStarted","Data":"b843403988cdb9bafb0cb5036d8a0bf50dfa65e13f0099a4b74641b70cd365ed"} Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.566484 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.598132 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" podStartSLOduration=33.017450094 podStartE2EDuration="57.598115233s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:43.575105239 +0000 UTC m=+922.805048147" lastFinishedPulling="2026-01-22 10:41:08.155770338 +0000 UTC m=+947.385713286" observedRunningTime="2026-01-22 10:41:08.594699403 +0000 UTC m=+947.824642301" watchObservedRunningTime="2026-01-22 10:41:08.598115233 +0000 UTC m=+947.828058141" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.619089 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g99jj"] Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.629269 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" podStartSLOduration=2.973015642 podStartE2EDuration="57.629222s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.340947151 +0000 UTC m=+892.570890049" lastFinishedPulling="2026-01-22 10:41:07.997153459 +0000 UTC m=+947.227096407" observedRunningTime="2026-01-22 10:41:08.626601961 +0000 UTC m=+947.856544869" watchObservedRunningTime="2026-01-22 10:41:08.629222 +0000 UTC m=+947.859164908" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.697621 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" podStartSLOduration=33.933096416 podStartE2EDuration="57.697602447s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:44.23158054 +0000 UTC m=+923.461523448" lastFinishedPulling="2026-01-22 10:41:07.996086531 +0000 UTC m=+947.226029479" observedRunningTime="2026-01-22 10:41:08.693934381 +0000 UTC m=+947.923877309" watchObservedRunningTime="2026-01-22 10:41:08.697602447 +0000 UTC m=+947.927545355" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.700819 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" podStartSLOduration=2.900272609 podStartE2EDuration="57.700810032s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.352903214 +0000 UTC m=+892.582846122" lastFinishedPulling="2026-01-22 10:41:08.153440606 +0000 UTC m=+947.383383545" observedRunningTime="2026-01-22 10:41:08.657395141 +0000 UTC m=+947.887338049" watchObservedRunningTime="2026-01-22 10:41:08.700810032 +0000 UTC m=+947.930752940" Jan 22 10:41:08 crc kubenswrapper[4752]: I0122 10:41:08.736198 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" podStartSLOduration=2.956044399 podStartE2EDuration="57.73617576s" podCreationTimestamp="2026-01-22 10:40:11 +0000 UTC" firstStartedPulling="2026-01-22 10:40:13.333073478 +0000 UTC m=+892.563016386" lastFinishedPulling="2026-01-22 10:41:08.113204799 +0000 UTC m=+947.343147747" observedRunningTime="2026-01-22 10:41:08.733170391 +0000 UTC m=+947.963113299" watchObservedRunningTime="2026-01-22 10:41:08.73617576 +0000 UTC m=+947.966118668" Jan 22 
10:41:09 crc kubenswrapper[4752]: I0122 10:41:09.574886 4752 generic.go:334] "Generic (PLEG): container finished" podID="fbb6b684-df47-4377-81da-9af71f3b3865" containerID="96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561" exitCode=0 Jan 22 10:41:09 crc kubenswrapper[4752]: I0122 10:41:09.575023 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g99jj" event={"ID":"fbb6b684-df47-4377-81da-9af71f3b3865","Type":"ContainerDied","Data":"96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561"} Jan 22 10:41:09 crc kubenswrapper[4752]: I0122 10:41:09.575088 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g99jj" event={"ID":"fbb6b684-df47-4377-81da-9af71f3b3865","Type":"ContainerStarted","Data":"0286bceeeadae58e4b874985de25ce7918ea72fe862400407a3357e101a6004c"} Jan 22 10:41:09 crc kubenswrapper[4752]: I0122 10:41:09.597722 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcsm9" podStartSLOduration=4.753032715 podStartE2EDuration="26.597702331s" podCreationTimestamp="2026-01-22 10:40:43 +0000 UTC" firstStartedPulling="2026-01-22 10:40:46.307281651 +0000 UTC m=+925.537224559" lastFinishedPulling="2026-01-22 10:41:08.151951257 +0000 UTC m=+947.381894175" observedRunningTime="2026-01-22 10:41:08.759916604 +0000 UTC m=+947.989859532" watchObservedRunningTime="2026-01-22 10:41:09.597702331 +0000 UTC m=+948.827645239" Jan 22 10:41:10 crc kubenswrapper[4752]: I0122 10:41:10.585468 4752 generic.go:334] "Generic (PLEG): container finished" podID="fbb6b684-df47-4377-81da-9af71f3b3865" containerID="630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660" exitCode=0 Jan 22 10:41:10 crc kubenswrapper[4752]: I0122 10:41:10.585550 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g99jj" event={"ID":"fbb6b684-df47-4377-81da-9af71f3b3865","Type":"ContainerDied","Data":"630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660"} Jan 22 10:41:11 crc kubenswrapper[4752]: I0122 10:41:11.596541 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g99jj" event={"ID":"fbb6b684-df47-4377-81da-9af71f3b3865","Type":"ContainerStarted","Data":"8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457"} Jan 22 10:41:11 crc kubenswrapper[4752]: I0122 10:41:11.612690 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g99jj" podStartSLOduration=22.166283424 podStartE2EDuration="23.612671146s" podCreationTimestamp="2026-01-22 10:40:48 +0000 UTC" firstStartedPulling="2026-01-22 10:41:09.576321669 +0000 UTC m=+948.806264597" lastFinishedPulling="2026-01-22 10:41:11.022709411 +0000 UTC m=+950.252652319" observedRunningTime="2026-01-22 10:41:11.612577363 +0000 UTC m=+950.842520291" watchObservedRunningTime="2026-01-22 10:41:11.612671146 +0000 UTC m=+950.842614054" Jan 22 10:41:13 crc kubenswrapper[4752]: I0122 10:41:13.611832 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:41:13 crc kubenswrapper[4752]: I0122 10:41:13.612223 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:41:13 crc kubenswrapper[4752]: I0122 10:41:13.658558 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcsm9"
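The probe lines above for certified-operators-xcsm9 show the usual ordering: the startup probe first reports unhealthy, then started, and only afterwards does the readiness probe produce a verdict (the status="" entries appear before a verdict is available). A sketch of that gating with invented types; in the kubelet, per-container probe workers implement this, and readiness probing does not take effect until the startup probe has succeeded once:

```go
package main

import "fmt"

// container tracks the minimal probe state for this illustration.
type container struct {
	started bool // set once the startup probe succeeds
	ready   bool
}

func onStartupProbe(c *container, healthy bool) string {
	if healthy {
		c.started = true
		return "started"
	}
	return "unhealthy"
}

func onReadinessProbe(c *container, healthy bool) string {
	if !c.started {
		return "" // no readiness verdict until startup has completed
	}
	c.ready = healthy
	if healthy {
		return "ready"
	}
	return "not ready"
}

func main() {
	c := &container{}
	fmt.Println(onStartupProbe(c, false))  // unhealthy
	fmt.Println(onReadinessProbe(c, true)) // "" -- startup not yet passed
	fmt.Println(onStartupProbe(c, true))   // started
	fmt.Println(onReadinessProbe(c, true)) // ready
}
```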
Jan 22 10:41:14 crc kubenswrapper[4752]: I0122 10:41:14.670034 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:41:14 crc kubenswrapper[4752]: I0122 10:41:14.895472 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcsm9"] Jan 22 10:41:16 crc kubenswrapper[4752]: I0122 10:41:16.636412 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcsm9" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="registry-server" containerID="cri-o://bc2d56c1950fa9fd6f11cd67c00cdb3a3e668791f9465eb50f2ab197684b4eea" gracePeriod=2 Jan 22 10:41:17 crc kubenswrapper[4752]: I0122 10:41:17.921732 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6b5d9f997d-clxq7" Jan 22 10:41:18 crc kubenswrapper[4752]: I0122 10:41:18.238288 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bthdw" Jan 22 10:41:18 crc kubenswrapper[4752]: I0122 10:41:18.659566 4752 generic.go:334] "Generic (PLEG): container finished" podID="0146b151-dc32-4af8-810f-f27edca3959e" containerID="bc2d56c1950fa9fd6f11cd67c00cdb3a3e668791f9465eb50f2ab197684b4eea" exitCode=0 Jan 22 10:41:18 crc kubenswrapper[4752]: I0122 10:41:18.659627 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerDied","Data":"bc2d56c1950fa9fd6f11cd67c00cdb3a3e668791f9465eb50f2ab197684b4eea"} Jan 22 10:41:18 crc kubenswrapper[4752]: I0122 10:41:18.839579 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:41:18 crc kubenswrapper[4752]: I0122 10:41:18.839864 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:41:18 crc kubenswrapper[4752]: I0122 10:41:18.895563 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.117435 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcsm9"
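The certified-operators-xcsm9 deletion above is a standard graceful stop: SyncLoop DELETE at 10:41:14.895, "Killing container with a grace period" with gracePeriod=2 at 10:41:16.636, and the PLEG ContainerDied event (exitCode=0) at 10:41:18.659, roughly the two-second grace period later. A sketch of the term-then-kill pattern; stopContainer here is a stand-in for the CRI StopContainer call, not kubelet code:

```go
package main

import (
	"fmt"
	"time"
)

// stopContainer sends SIGTERM (simulated), waits up to gracePeriod for the
// process to exit on its own, and only escalates to SIGKILL if it doesn't.
func stopContainer(id string, gracePeriod time.Duration, exited <-chan struct{}) {
	fmt.Printf("sending SIGTERM to %s\n", id)
	select {
	case <-exited:
		fmt.Println("container exited within grace period (PLEG records exitCode)")
	case <-time.After(gracePeriod):
		fmt.Printf("grace period elapsed, sending SIGKILL to %s\n", id)
	}
}

func main() {
	exited := make(chan struct{})
	go func() {
		// Simulate the registry-server shutting down on SIGTERM.
		time.Sleep(500 * time.Millisecond)
		close(exited)
	}()
	stopContainer("cri-o://bc2d56c1950f...", 2*time.Second, exited)
}
```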
Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.232695 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-utilities\") pod \"0146b151-dc32-4af8-810f-f27edca3959e\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.232972 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czs6d\" (UniqueName: \"kubernetes.io/projected/0146b151-dc32-4af8-810f-f27edca3959e-kube-api-access-czs6d\") pod \"0146b151-dc32-4af8-810f-f27edca3959e\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.233126 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-catalog-content\") pod \"0146b151-dc32-4af8-810f-f27edca3959e\" (UID: \"0146b151-dc32-4af8-810f-f27edca3959e\") " Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.233570 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-utilities" (OuterVolumeSpecName: "utilities") pod "0146b151-dc32-4af8-810f-f27edca3959e" (UID: "0146b151-dc32-4af8-810f-f27edca3959e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.233976 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.241038 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0146b151-dc32-4af8-810f-f27edca3959e-kube-api-access-czs6d" (OuterVolumeSpecName: "kube-api-access-czs6d") pod "0146b151-dc32-4af8-810f-f27edca3959e" (UID: "0146b151-dc32-4af8-810f-f27edca3959e"). InnerVolumeSpecName "kube-api-access-czs6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.278770 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0146b151-dc32-4af8-810f-f27edca3959e" (UID: "0146b151-dc32-4af8-810f-f27edca3959e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.335354 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czs6d\" (UniqueName: \"kubernetes.io/projected/0146b151-dc32-4af8-810f-f27edca3959e-kube-api-access-czs6d\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.335393 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0146b151-dc32-4af8-810f-f27edca3959e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.671589 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcsm9" event={"ID":"0146b151-dc32-4af8-810f-f27edca3959e","Type":"ContainerDied","Data":"3ff22818b938e4b0378de03132b87337274c372460c73187318fac9e0b26d4d8"} Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.672947 4752 scope.go:117] "RemoveContainer" containerID="bc2d56c1950fa9fd6f11cd67c00cdb3a3e668791f9465eb50f2ab197684b4eea" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.671627 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcsm9" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.708921 4752 scope.go:117] "RemoveContainer" containerID="6025539024d83ffe0e1e1c06e11c50f37cb5e932be30aa98113152bc20f92547" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.722893 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcsm9"] Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.732987 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcsm9"] Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.740848 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:41:19 crc kubenswrapper[4752]: I0122 10:41:19.750143 4752 scope.go:117] "RemoveContainer" containerID="e501dd763f1d3448d68ba8eeb8a4a984e34193b81d8437ff0ab8cb2ace80acf8" Jan 22 10:41:21 crc kubenswrapper[4752]: I0122 10:41:21.108315 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0146b151-dc32-4af8-810f-f27edca3959e" path="/var/lib/kubelet/pods/0146b151-dc32-4af8-810f-f27edca3959e/volumes" Jan 22 10:41:21 crc kubenswrapper[4752]: I0122 10:41:21.984639 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-62w75" Jan 22 10:41:22 crc kubenswrapper[4752]: I0122 10:41:22.028339 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-2vmkm" Jan 22 10:41:22 crc kubenswrapper[4752]: I0122 10:41:22.084303 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-6pt5z" Jan 22 10:41:22 crc kubenswrapper[4752]: I0122 10:41:22.086800 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g99jj"] Jan 22 10:41:22 crc kubenswrapper[4752]: I0122 10:41:22.696634 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g99jj" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="registry-server" 
containerID="cri-o://8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457" gracePeriod=2 Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.135386 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.193488 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj4vb\" (UniqueName: \"kubernetes.io/projected/fbb6b684-df47-4377-81da-9af71f3b3865-kube-api-access-sj4vb\") pod \"fbb6b684-df47-4377-81da-9af71f3b3865\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.193596 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-utilities\") pod \"fbb6b684-df47-4377-81da-9af71f3b3865\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.193674 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-catalog-content\") pod \"fbb6b684-df47-4377-81da-9af71f3b3865\" (UID: \"fbb6b684-df47-4377-81da-9af71f3b3865\") " Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.194486 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-utilities" (OuterVolumeSpecName: "utilities") pod "fbb6b684-df47-4377-81da-9af71f3b3865" (UID: "fbb6b684-df47-4377-81da-9af71f3b3865"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.201745 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb6b684-df47-4377-81da-9af71f3b3865-kube-api-access-sj4vb" (OuterVolumeSpecName: "kube-api-access-sj4vb") pod "fbb6b684-df47-4377-81da-9af71f3b3865" (UID: "fbb6b684-df47-4377-81da-9af71f3b3865"). InnerVolumeSpecName "kube-api-access-sj4vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.212323 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbb6b684-df47-4377-81da-9af71f3b3865" (UID: "fbb6b684-df47-4377-81da-9af71f3b3865"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.295952 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.295988 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb6b684-df47-4377-81da-9af71f3b3865-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.296001 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj4vb\" (UniqueName: \"kubernetes.io/projected/fbb6b684-df47-4377-81da-9af71f3b3865-kube-api-access-sj4vb\") on node \"crc\" DevicePath \"\"" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.706926 4752 generic.go:334] "Generic (PLEG): container finished" podID="fbb6b684-df47-4377-81da-9af71f3b3865" containerID="8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457" exitCode=0 Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.706973 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g99jj" event={"ID":"fbb6b684-df47-4377-81da-9af71f3b3865","Type":"ContainerDied","Data":"8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457"} Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.707006 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g99jj" event={"ID":"fbb6b684-df47-4377-81da-9af71f3b3865","Type":"ContainerDied","Data":"0286bceeeadae58e4b874985de25ce7918ea72fe862400407a3357e101a6004c"} Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.707023 4752 scope.go:117] "RemoveContainer" containerID="8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.707032 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g99jj" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.725271 4752 scope.go:117] "RemoveContainer" containerID="630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.741432 4752 scope.go:117] "RemoveContainer" containerID="96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.769473 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g99jj"] Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.775175 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g99jj"] Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.783284 4752 scope.go:117] "RemoveContainer" containerID="8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457" Jan 22 10:41:23 crc kubenswrapper[4752]: E0122 10:41:23.784270 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457\": container with ID starting with 8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457 not found: ID does not exist" containerID="8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.784321 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457"} err="failed to get container status \"8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457\": rpc error: code = NotFound desc = could not find container \"8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457\": container with ID starting with 8678bc261de2fd842efbeb3c120662816c840993211370c319b44db71db6e457 not found: ID does not exist" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.784357 4752 scope.go:117] "RemoveContainer" containerID="630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660" Jan 22 10:41:23 crc kubenswrapper[4752]: E0122 10:41:23.784768 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660\": container with ID starting with 630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660 not found: ID does not exist" containerID="630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.784793 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660"} err="failed to get container status \"630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660\": rpc error: code = NotFound desc = could not find container \"630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660\": container with ID starting with 630c261a5c182b3aa22e183987e574d0b442cdd8680bfefaccbd78ad462e8660 not found: ID does not exist" Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.784808 4752 scope.go:117] "RemoveContainer" containerID="96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561" Jan 22 10:41:23 crc kubenswrapper[4752]: E0122 10:41:23.785207 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561\": container with ID starting with 96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561 not found: ID does not exist" containerID="96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561"
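The NotFound errors above are expected rather than alarming: "RemoveContainer" runs for each of the pod's containers from more than one cleanup path (each container ID appears twice in the RemoveContainer lines), so the second attempt finds the ID already gone, ContainerStatus returns NotFound, and pod_container_deletor logs the error and moves on. A sketch of that idempotent cleanup, with a toy in-memory runtime standing in for the CRI client:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

// runtime is a toy container store; remove fails with NotFound for
// IDs that were already deleted, like the CRI calls in the log.
type runtime struct{ containers map[string]bool }

func (r *runtime) remove(id string) error {
	if !r.containers[id] {
		return errNotFound
	}
	delete(r.containers, id)
	return nil
}

// cleanup mirrors the log's behavior: a failed delete is logged and
// ignored rather than failing the pod teardown.
func cleanup(r *runtime, id string) {
	if err := r.remove(id); err != nil {
		fmt.Printf("DeleteContainer returned error for %s: %v\n", id, err)
		return
	}
	fmt.Printf("removed %s\n", id)
}

func main() {
	r := &runtime{containers: map[string]bool{"8678bc261de2": true}}
	cleanup(r, "8678bc261de2") // removed
	cleanup(r, "8678bc261de2") // NotFound, logged and ignored
}
```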
Jan 22 10:41:23 crc kubenswrapper[4752]: I0122 10:41:23.785228 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561"} err="failed to get container status \"96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561\": rpc error: code = NotFound desc = could not find container \"96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561\": container with ID starting with 96dc8c6aee171b698ff232287275e9951152b46e4f0696c2fae12ba723566561 not found: ID does not exist" Jan 22 10:41:25 crc kubenswrapper[4752]: I0122 10:41:25.111905 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" path="/var/lib/kubelet/pods/fbb6b684-df47-4377-81da-9af71f3b3865/volumes" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.405146 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-548dc7894c-bwj9k"] Jan 22 10:41:42 crc kubenswrapper[4752]: E0122 10:41:42.406086 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="extract-utilities" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406104 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="extract-utilities" Jan 22 10:41:42 crc kubenswrapper[4752]: E0122 10:41:42.406124 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="registry-server" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406131 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="registry-server" Jan 22 10:41:42 crc kubenswrapper[4752]: E0122 10:41:42.406140 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="extract-content" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406146 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="extract-content" Jan 22 10:41:42 crc kubenswrapper[4752]: E0122 10:41:42.406161 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="registry-server" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406168 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="registry-server" Jan 22 10:41:42 crc kubenswrapper[4752]: E0122 10:41:42.406182 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="extract-content" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406188 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="extract-content" Jan 22 10:41:42 crc kubenswrapper[4752]: E0122 10:41:42.406200 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="extract-utilities" Jan 22 
10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406208 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="extract-utilities" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406383 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0146b151-dc32-4af8-810f-f27edca3959e" containerName="registry-server" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.406406 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="registry-server" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.407302 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.409184 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.409420 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.410184 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.414925 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wtwsz" Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.419834 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548dc7894c-bwj9k"] Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.445612 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77b567bfc7-xnx87"] Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.446959 4752 util.go:30] "No sandbox for pod can be found. 
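The "DeleteContainer returned error" / NotFound pair at the start of this stretch shows the kubelet retrying removal of a container that CRI-O has already deleted; a gRPC NotFound status on deletion effectively means "already gone" rather than a real failure. A minimal sketch of that classification using the standard gRPC status API (the removeContainer stub and its message are illustrative, not kubelet code):

```go
// Sketch: treating a CRI NotFound error on container removal as
// "already deleted". Only the grpc status/codes packages are assumed.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer RPC that fails
// because the runtime no longer knows the container ID.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

func main() {
	err := removeContainer("96dc8c6aee17")
	switch status.Code(err) {
	case codes.OK:
		fmt.Println("removed")
	case codes.NotFound:
		// Matches the log above: deleting an absent container is safe
		// to treat as success, with no further retries needed.
		fmt.Println("already gone:", err)
	default:
		fmt.Println("real failure:", err)
	}
}
```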
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.446959 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.452076 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.515470 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b567bfc7-xnx87"]
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.542798 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-dns-svc\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.543136 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-config\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.543248 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/897f2843-ed58-4f28-9ab1-f8f9e6e50541-config\") pod \"dnsmasq-dns-548dc7894c-bwj9k\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.543385 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x2rl\" (UniqueName: \"kubernetes.io/projected/897f2843-ed58-4f28-9ab1-f8f9e6e50541-kube-api-access-7x2rl\") pod \"dnsmasq-dns-548dc7894c-bwj9k\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.543468 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8bp7\" (UniqueName: \"kubernetes.io/projected/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-kube-api-access-w8bp7\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.644893 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-dns-svc\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.644947 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-config\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.644976 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/897f2843-ed58-4f28-9ab1-f8f9e6e50541-config\") pod \"dnsmasq-dns-548dc7894c-bwj9k\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.645027 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x2rl\" (UniqueName: \"kubernetes.io/projected/897f2843-ed58-4f28-9ab1-f8f9e6e50541-kube-api-access-7x2rl\") pod \"dnsmasq-dns-548dc7894c-bwj9k\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.645046 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8bp7\" (UniqueName: \"kubernetes.io/projected/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-kube-api-access-w8bp7\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.646230 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-dns-svc\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.646730 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-config\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.647303 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/897f2843-ed58-4f28-9ab1-f8f9e6e50541-config\") pod \"dnsmasq-dns-548dc7894c-bwj9k\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.669245 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8bp7\" (UniqueName: \"kubernetes.io/projected/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-kube-api-access-w8bp7\") pod \"dnsmasq-dns-77b567bfc7-xnx87\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.670592 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x2rl\" (UniqueName: \"kubernetes.io/projected/897f2843-ed58-4f28-9ab1-f8f9e6e50541-kube-api-access-7x2rl\") pod \"dnsmasq-dns-548dc7894c-bwj9k\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.725624 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k"
Jan 22 10:41:42 crc kubenswrapper[4752]: I0122 10:41:42.832378 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87"
Jan 22 10:41:43 crc kubenswrapper[4752]: I0122 10:41:43.172495 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548dc7894c-bwj9k"]
Jan 22 10:41:43 crc kubenswrapper[4752]: I0122 10:41:43.311876 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b567bfc7-xnx87"]
Jan 22 10:41:43 crc kubenswrapper[4752]: W0122 10:41:43.317876 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9db0cbc1_d370_40dd_90e5_e419c1e5ad38.slice/crio-1f9425b37896afc44c222b3cec7353fe3cd62d63d3493004d6c3ee63776e83bd WatchSource:0}: Error finding container 1f9425b37896afc44c222b3cec7353fe3cd62d63d3493004d6c3ee63776e83bd: Status 404 returned error can't find the container with id 1f9425b37896afc44c222b3cec7353fe3cd62d63d3493004d6c3ee63776e83bd
Jan 22 10:41:43 crc kubenswrapper[4752]: I0122 10:41:43.915531 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k" event={"ID":"897f2843-ed58-4f28-9ab1-f8f9e6e50541","Type":"ContainerStarted","Data":"92191e692c1f8b68bc557ea65df44fdc60a6d4e1b4cd22c40e1cb4d496ddc4f8"}
Jan 22 10:41:43 crc kubenswrapper[4752]: I0122 10:41:43.916743 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87" event={"ID":"9db0cbc1-d370-40dd-90e5-e419c1e5ad38","Type":"ContainerStarted","Data":"1f9425b37896afc44c222b3cec7353fe3cd62d63d3493004d6c3ee63776e83bd"}
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.166546 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548dc7894c-bwj9k"]
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.198554 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c44b4bf7-jmmzr"]
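Every "SyncLoop ADD"/"UPDATE"/"DELETE" entry above is the kubelet consuming a pod watch event from the API server, and the dnsmasq-dns pods cycling through fresh ReplicaSet hashes (548dc7894c, then 77b567bfc7, then 76c44b4bf7, and so on) can be observed the same way from the client side. A hedged client-go sketch; the kubeconfig path is an assumption, not a value taken from this log:

```go
// Sketch: watching the "openstack" namespace for the same pod churn the
// SyncLoop entries above record. Standard client-go usage.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// ADDED/MODIFIED/DELETED here correspond to the kubelet's
		// "SyncLoop ADD/UPDATE/DELETE" lines for pods bound to this node.
		fmt.Printf("%-8s %s\n", ev.Type, pod.Name)
	}
}
```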
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.199739 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.213847 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c44b4bf7-jmmzr"]
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.313645 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql75k\" (UniqueName: \"kubernetes.io/projected/e374cd7c-feeb-428f-8181-2d909a962448-kube-api-access-ql75k\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.313730 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-dns-svc\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.313818 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-config\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.415608 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-config\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.415679 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql75k\" (UniqueName: \"kubernetes.io/projected/e374cd7c-feeb-428f-8181-2d909a962448-kube-api-access-ql75k\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.415708 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-dns-svc\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.416515 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-dns-svc\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.416576 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-config\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.442391 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql75k\" (UniqueName: \"kubernetes.io/projected/e374cd7c-feeb-428f-8181-2d909a962448-kube-api-access-ql75k\") pod \"dnsmasq-dns-76c44b4bf7-jmmzr\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.474647 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b567bfc7-xnx87"]
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.517424 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7786d8fd7-9wgbn"]
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.518516 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.535125 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7786d8fd7-9wgbn"]
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.564690 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.621957 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-config\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.622011 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbt2t\" (UniqueName: \"kubernetes.io/projected/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-kube-api-access-pbt2t\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.622052 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-dns-svc\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.722894 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-dns-svc\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.723018 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-config\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.723040 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbt2t\" (UniqueName: \"kubernetes.io/projected/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-kube-api-access-pbt2t\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn"
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-dns-svc\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.724031 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-config\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.751445 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbt2t\" (UniqueName: \"kubernetes.io/projected/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-kube-api-access-pbt2t\") pod \"dnsmasq-dns-7786d8fd7-9wgbn\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.803097 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c44b4bf7-jmmzr"] Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.825237 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594b65fc49-fjdpq"] Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.826323 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.836732 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594b65fc49-fjdpq"] Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.846324 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.927730 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvwbc\" (UniqueName: \"kubernetes.io/projected/b07139f0-058d-4338-88f7-7e42a9aebeb6-kube-api-access-jvwbc\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.927793 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-config\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:46 crc kubenswrapper[4752]: I0122 10:41:46.927812 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-dns-svc\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.029344 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvwbc\" (UniqueName: \"kubernetes.io/projected/b07139f0-058d-4338-88f7-7e42a9aebeb6-kube-api-access-jvwbc\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.029416 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-config\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.029465 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-dns-svc\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.030259 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-config\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.030376 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-dns-svc\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.071953 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvwbc\" (UniqueName: \"kubernetes.io/projected/b07139f0-058d-4338-88f7-7e42a9aebeb6-kube-api-access-jvwbc\") pod \"dnsmasq-dns-594b65fc49-fjdpq\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") " pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" 
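Each kubenswrapper payload above carries a klog header: an I/W/E severity letter fused to an mmdd date, a time with microseconds, the PID, and a file:line source location. When triaging a capture like this one, splitting that header out makes it easy to filter by severity or by source file. A small self-contained sketch; the regexp is written against the lines above, not against any official grammar:

```go
// Sketch: pulling severity, timestamp, PID, and source location out of
// the klog-style headers used by the kubenswrapper entries above.
package main

import (
	"fmt"
	"regexp"
)

// Example shape: I0122 10:41:46.927730 4752 reconciler_common.go:245] "..."
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([^ \]]+)\] (.*)$`)

func main() {
	line := `E0122 10:41:42.406086 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb6b684-df47-4377-81da-9af71f3b3865" containerName="extract-utilities"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```

Feeding it journal output first requires stripping the journald prefix (everything up to and including "kubenswrapper[4752]: "); the header regexp then applies to the remainder.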
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.156584 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.348591 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.352602 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.357736 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.358287 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.358439 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.358594 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.358777 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bshlx"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.359048 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.359216 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.376686 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.442298 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.442789 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-config-data\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.442890 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443080 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443171 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443211 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443315 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443645 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443688 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443726 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.443820 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrj5\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-kube-api-access-mhrj5\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.544783 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.544873 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.544929 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.544964 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545010 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545029 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545049 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545095 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrj5\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-kube-api-access-mhrj5\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545119 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545136 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-config-data\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545174 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.545805 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0"
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.546950 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.547168 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.547331 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-config-data\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.548093 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.548793 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.549287 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.551490 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.559511 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.564879 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrj5\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-kube-api-access-mhrj5\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.567930 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " pod="openstack/rabbitmq-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.627440 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.628694 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.635165 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.635636 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qwj9f" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.635698 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.635898 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.636040 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.637421 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.637637 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.643903 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.733004 4752 util.go:30] "No sandbox for pod can be found. 
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.733004 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753260 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9356406a-3c6e-4af1-a8bb-92244286ba39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753319 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753339 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753368 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753385 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753403 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9356406a-3c6e-4af1-a8bb-92244286ba39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753418 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4lbl\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-kube-api-access-s4lbl\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753438 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753465 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753518 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.753537 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854370 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854425 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854460 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854474 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854492 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9356406a-3c6e-4af1-a8bb-92244286ba39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854507 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4lbl\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-kube-api-access-s4lbl\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854529 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854559 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854613 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854633 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854693 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9356406a-3c6e-4af1-a8bb-92244286ba39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.854986 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.855579 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.855789 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.856093 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.857847 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9356406a-3c6e-4af1-a8bb-92244286ba39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.858220 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.859440 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.865462 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9356406a-3c6e-4af1-a8bb-92244286ba39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.873550 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.875507 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.877101 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.879583 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4lbl\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-kube-api-access-s4lbl\") pod \"rabbitmq-cell1-server-0\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.937771 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.940008 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.944759 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.948760 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.948877 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.949717 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-r88ts"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.949915 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.950100 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.950370 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.953642 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956467 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956528 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956549 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956597 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956622 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956641 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956658 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4bm\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-kube-api-access-mb4bm\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956678 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956696 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956721 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 22 10:41:47 crc kubenswrapper[4752]: I0122 10:41:47.956738 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0"
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059281 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059434 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059468 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059470 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059526 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4bm\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-kube-api-access-mb4bm\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059592 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059627 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059695 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059727 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059844 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.059989 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.060034 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.061066 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.061561 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.061656 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.061761 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.061879 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.065655 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.068969 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.070676 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.070743 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.091346 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4bm\" (UniqueName: \"kubernetes.io/projected/76dee6bc-ab39-4f6c-bc31-6ef18020e5f3-kube-api-access-mb4bm\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.091635 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3\") " pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:48 crc kubenswrapper[4752]: I0122 10:41:48.266704 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.368012 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.375686 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.377305 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.380559 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.380582 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.383265 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mhwbg" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.383323 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.393712 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482278 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-config-data-default\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482339 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482382 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrjmz\" (UniqueName: \"kubernetes.io/projected/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-kube-api-access-mrjmz\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482431 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482455 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482480 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-kolla-config\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482498 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.482526 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.584993 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585053 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-kolla-config\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585072 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585109 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585174 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-config-data-default\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585201 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585242 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrjmz\" (UniqueName: \"kubernetes.io/projected/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-kube-api-access-mrjmz\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585304 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-operator-scripts\") pod \"openstack-galera-0\" 
(UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585323 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.585714 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-kolla-config\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.586214 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-config-data-default\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.587027 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.588442 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.592275 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.606801 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrjmz\" (UniqueName: \"kubernetes.io/projected/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-kube-api-access-mrjmz\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.608315 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.609409 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64ed2cf-3432-43c8-a8cd-236c09d5adb3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f64ed2cf-3432-43c8-a8cd-236c09d5adb3\") " pod="openstack/openstack-galera-0" Jan 22 10:41:49 crc kubenswrapper[4752]: I0122 10:41:49.702681 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.795689 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.798993 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.802312 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.802773 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6dknp" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.803400 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.803571 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.803460 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912168 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/06081432-1ce8-4002-8522-6e3472acb753-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912209 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsxpv\" (UniqueName: \"kubernetes.io/projected/06081432-1ce8-4002-8522-6e3472acb753-kube-api-access-gsxpv\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912251 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912279 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912339 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06081432-1ce8-4002-8522-6e3472acb753-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912359 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06081432-1ce8-4002-8522-6e3472acb753-combined-ca-bundle\") 
pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912495 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:50 crc kubenswrapper[4752]: I0122 10:41:50.912592 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.006026 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.006917 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.011231 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.011416 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.011694 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-d5ctt" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.014634 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.014875 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.014999 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06081432-1ce8-4002-8522-6e3472acb753-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.015028 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06081432-1ce8-4002-8522-6e3472acb753-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.015057 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.015084 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.015135 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/06081432-1ce8-4002-8522-6e3472acb753-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.015154 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxpv\" (UniqueName: \"kubernetes.io/projected/06081432-1ce8-4002-8522-6e3472acb753-kube-api-access-gsxpv\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.015184 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.016403 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.016489 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06081432-1ce8-4002-8522-6e3472acb753-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.017026 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.017055 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06081432-1ce8-4002-8522-6e3472acb753-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.021783 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06081432-1ce8-4002-8522-6e3472acb753-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " 
pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.032647 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.035373 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/06081432-1ce8-4002-8522-6e3472acb753-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.044255 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxpv\" (UniqueName: \"kubernetes.io/projected/06081432-1ce8-4002-8522-6e3472acb753-kube-api-access-gsxpv\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.088988 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"06081432-1ce8-4002-8522-6e3472acb753\") " pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.116296 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bcf8869-ab22-4792-bb7a-72fc6887d091-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.116360 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8tf\" (UniqueName: \"kubernetes.io/projected/3bcf8869-ab22-4792-bb7a-72fc6887d091-kube-api-access-sp8tf\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.116387 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bcf8869-ab22-4792-bb7a-72fc6887d091-kolla-config\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.116445 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bcf8869-ab22-4792-bb7a-72fc6887d091-config-data\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.116494 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf8869-ab22-4792-bb7a-72fc6887d091-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.130184 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.217829 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bcf8869-ab22-4792-bb7a-72fc6887d091-config-data\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.218144 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf8869-ab22-4792-bb7a-72fc6887d091-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.218209 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bcf8869-ab22-4792-bb7a-72fc6887d091-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.218251 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8tf\" (UniqueName: \"kubernetes.io/projected/3bcf8869-ab22-4792-bb7a-72fc6887d091-kube-api-access-sp8tf\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.218284 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bcf8869-ab22-4792-bb7a-72fc6887d091-kolla-config\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.219021 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bcf8869-ab22-4792-bb7a-72fc6887d091-kolla-config\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.219832 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bcf8869-ab22-4792-bb7a-72fc6887d091-config-data\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.221481 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcf8869-ab22-4792-bb7a-72fc6887d091-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.222212 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bcf8869-ab22-4792-bb7a-72fc6887d091-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.233237 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8tf\" (UniqueName: \"kubernetes.io/projected/3bcf8869-ab22-4792-bb7a-72fc6887d091-kube-api-access-sp8tf\") pod \"memcached-0\" (UID: 
\"3bcf8869-ab22-4792-bb7a-72fc6887d091\") " pod="openstack/memcached-0" Jan 22 10:41:51 crc kubenswrapper[4752]: I0122 10:41:51.323163 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 10:41:52 crc kubenswrapper[4752]: I0122 10:41:52.987202 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:41:52 crc kubenswrapper[4752]: I0122 10:41:52.988674 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:41:52 crc kubenswrapper[4752]: I0122 10:41:52.992639 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jd7dq" Jan 22 10:41:52 crc kubenswrapper[4752]: I0122 10:41:52.994774 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:41:53 crc kubenswrapper[4752]: I0122 10:41:53.150366 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7sw\" (UniqueName: \"kubernetes.io/projected/37c7ae68-7fae-44ab-bb3a-f838bdc15bea-kube-api-access-7p7sw\") pod \"kube-state-metrics-0\" (UID: \"37c7ae68-7fae-44ab-bb3a-f838bdc15bea\") " pod="openstack/kube-state-metrics-0" Jan 22 10:41:53 crc kubenswrapper[4752]: I0122 10:41:53.251235 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7sw\" (UniqueName: \"kubernetes.io/projected/37c7ae68-7fae-44ab-bb3a-f838bdc15bea-kube-api-access-7p7sw\") pod \"kube-state-metrics-0\" (UID: \"37c7ae68-7fae-44ab-bb3a-f838bdc15bea\") " pod="openstack/kube-state-metrics-0" Jan 22 10:41:53 crc kubenswrapper[4752]: I0122 10:41:53.318720 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7sw\" (UniqueName: \"kubernetes.io/projected/37c7ae68-7fae-44ab-bb3a-f838bdc15bea-kube-api-access-7p7sw\") pod \"kube-state-metrics-0\" (UID: \"37c7ae68-7fae-44ab-bb3a-f838bdc15bea\") " pod="openstack/kube-state-metrics-0" Jan 22 10:41:53 crc kubenswrapper[4752]: I0122 10:41:53.619832 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.462581 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.465252 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.467240 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.467348 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.467485 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.467560 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.467967 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-n6xwj" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.467997 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.468530 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.474130 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.479683 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575182 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575245 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575421 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-config\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575573 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575680 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575764 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9750781f-e5d3-4106-ac9e-431b017df583-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575795 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.575821 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.576013 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.576045 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85mq\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-kube-api-access-c85mq\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677434 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677494 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-config\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677531 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " 
pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677561 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677588 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9750781f-e5d3-4106-ac9e-431b017df583-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677604 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677620 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677646 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677662 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85mq\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-kube-api-access-c85mq\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.677719 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.680494 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.680971 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.682497 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.682835 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9750781f-e5d3-4106-ac9e-431b017df583-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.683762 4752 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.683789 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d012c5afc253dfb5bb1585a9f32cbc4589affd7948918f5a8ea0a0a38ad6626e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.686426 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.690295 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-config\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.691647 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.697604 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85mq\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-kube-api-access-c85mq\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.714166 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.720666 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:54 crc kubenswrapper[4752]: I0122 10:41:54.785698 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.425069 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vpt5c"] Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.426545 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.428582 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.428734 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-n296j" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.428974 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.438750 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vpt5c"] Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.561073 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-z6f8t"] Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.562546 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.570836 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z6f8t"] Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627611 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wktcs\" (UniqueName: \"kubernetes.io/projected/7d696fd8-24f0-4e7a-801b-6376ea06f238-kube-api-access-wktcs\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627674 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-run\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627716 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-run-ovn\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627744 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d696fd8-24f0-4e7a-801b-6376ea06f238-scripts\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627764 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d696fd8-24f0-4e7a-801b-6376ea06f238-combined-ca-bundle\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627782 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-log-ovn\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.627803 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d696fd8-24f0-4e7a-801b-6376ea06f238-ovn-controller-tls-certs\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729593 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12e43fc8-ff00-42df-9091-307d6dc3e7d5-scripts\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729643 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7d696fd8-24f0-4e7a-801b-6376ea06f238-scripts\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729686 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d696fd8-24f0-4e7a-801b-6376ea06f238-combined-ca-bundle\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729704 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-log-ovn\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729726 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d696fd8-24f0-4e7a-801b-6376ea06f238-ovn-controller-tls-certs\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729758 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-run\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729804 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-lib\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729827 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-log\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729869 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wktcs\" (UniqueName: \"kubernetes.io/projected/7d696fd8-24f0-4e7a-801b-6376ea06f238-kube-api-access-wktcs\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729896 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-run\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729914 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-etc-ovs\") pod \"ovn-controller-ovs-z6f8t\" (UID: 
\"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729935 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckfp\" (UniqueName: \"kubernetes.io/projected/12e43fc8-ff00-42df-9091-307d6dc3e7d5-kube-api-access-cckfp\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.729962 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-run-ovn\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.730338 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-run-ovn\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.730382 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-log-ovn\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.730669 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d696fd8-24f0-4e7a-801b-6376ea06f238-var-run\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.743452 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d696fd8-24f0-4e7a-801b-6376ea06f238-scripts\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.801482 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d696fd8-24f0-4e7a-801b-6376ea06f238-ovn-controller-tls-certs\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.803973 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d696fd8-24f0-4e7a-801b-6376ea06f238-combined-ca-bundle\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.806569 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wktcs\" (UniqueName: \"kubernetes.io/projected/7d696fd8-24f0-4e7a-801b-6376ea06f238-kube-api-access-wktcs\") pod \"ovn-controller-vpt5c\" (UID: \"7d696fd8-24f0-4e7a-801b-6376ea06f238\") " pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.808253 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vpt5c" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831239 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-run\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831318 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-lib\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831349 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-log\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831399 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-etc-ovs\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831423 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckfp\" (UniqueName: \"kubernetes.io/projected/12e43fc8-ff00-42df-9091-307d6dc3e7d5-kube-api-access-cckfp\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831457 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12e43fc8-ff00-42df-9091-307d6dc3e7d5-scripts\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831517 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-run\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.831663 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-lib\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.832295 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-etc-ovs\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.832382 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/12e43fc8-ff00-42df-9091-307d6dc3e7d5-var-log\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.833539 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12e43fc8-ff00-42df-9091-307d6dc3e7d5-scripts\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.849572 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckfp\" (UniqueName: \"kubernetes.io/projected/12e43fc8-ff00-42df-9091-307d6dc3e7d5-kube-api-access-cckfp\") pod \"ovn-controller-ovs-z6f8t\" (UID: \"12e43fc8-ff00-42df-9091-307d6dc3e7d5\") " pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:56 crc kubenswrapper[4752]: I0122 10:41:56.891901 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.298664 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.300253 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.303708 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-flcz6" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.303936 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.304114 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.304298 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.306347 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.309364 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.471728 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.471808 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3321ba2b-722c-42c7-aa42-0325165b9437-config\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.471875 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrjb\" (UniqueName: \"kubernetes.io/projected/3321ba2b-722c-42c7-aa42-0325165b9437-kube-api-access-kwrjb\") pod \"ovsdbserver-nb-0\" (UID: 
\"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.471911 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3321ba2b-722c-42c7-aa42-0325165b9437-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.471964 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.472046 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.472071 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3321ba2b-722c-42c7-aa42-0325165b9437-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.472104 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573431 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573496 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3321ba2b-722c-42c7-aa42-0325165b9437-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573539 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573752 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573799 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3321ba2b-722c-42c7-aa42-0325165b9437-config\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573848 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwrjb\" (UniqueName: \"kubernetes.io/projected/3321ba2b-722c-42c7-aa42-0325165b9437-kube-api-access-kwrjb\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573905 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3321ba2b-722c-42c7-aa42-0325165b9437-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.573942 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.574142 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3321ba2b-722c-42c7-aa42-0325165b9437-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.574411 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.574965 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3321ba2b-722c-42c7-aa42-0325165b9437-config\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.575440 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3321ba2b-722c-42c7-aa42-0325165b9437-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.601594 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.625180 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.626722 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.626896 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3321ba2b-722c-42c7-aa42-0325165b9437-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.627542 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwrjb\" (UniqueName: \"kubernetes.io/projected/3321ba2b-722c-42c7-aa42-0325165b9437-kube-api-access-kwrjb\") pod \"ovsdbserver-nb-0\" (UID: \"3321ba2b-722c-42c7-aa42-0325165b9437\") " pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.696701 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.723927 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:41:57 crc kubenswrapper[4752]: I0122 10:41:57.723983 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.858602 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.877433 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.879706 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-r4rqt" Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.879919 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.880355 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.881800 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 10:42:00 crc kubenswrapper[4752]: I0122 10:42:00.892194 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.036142 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2558fc-5258-4a37-b46c-59ebc33503c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.036468 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.036521 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f2558fc-5258-4a37-b46c-59ebc33503c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.036546 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mknt8\" (UniqueName: \"kubernetes.io/projected/0f2558fc-5258-4a37-b46c-59ebc33503c6-kube-api-access-mknt8\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.036580 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.037195 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.037278 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f2558fc-5258-4a37-b46c-59ebc33503c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " 
pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.037330 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138698 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2558fc-5258-4a37-b46c-59ebc33503c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138745 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138783 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f2558fc-5258-4a37-b46c-59ebc33503c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138808 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mknt8\" (UniqueName: \"kubernetes.io/projected/0f2558fc-5258-4a37-b46c-59ebc33503c6-kube-api-access-mknt8\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138836 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138932 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.138963 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f2558fc-5258-4a37-b46c-59ebc33503c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.139001 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.139914 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0f2558fc-5258-4a37-b46c-59ebc33503c6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.140884 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f2558fc-5258-4a37-b46c-59ebc33503c6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.140970 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f2558fc-5258-4a37-b46c-59ebc33503c6-config\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.141216 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.146001 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.146705 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.156670 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mknt8\" (UniqueName: \"kubernetes.io/projected/0f2558fc-5258-4a37-b46c-59ebc33503c6-kube-api-access-mknt8\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.160106 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f2558fc-5258-4a37-b46c-59ebc33503c6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.166091 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f2558fc-5258-4a37-b46c-59ebc33503c6\") " pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:01 crc kubenswrapper[4752]: I0122 10:42:01.207378 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:03 crc kubenswrapper[4752]: I0122 10:42:03.336250 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.706879 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.706944 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.707130 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7x2rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-548dc7894c-bwj9k_openstack(897f2843-ed58-4f28-9ab1-f8f9e6e50541): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.708314 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k" podUID="897f2843-ed58-4f28-9ab1-f8f9e6e50541" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.741109 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.741251 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.741488 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8bp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-77b567bfc7-xnx87_openstack(9db0cbc1-d370-40dd-90e5-e419c1e5ad38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:42:03 crc kubenswrapper[4752]: E0122 10:42:03.742931 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87" podUID="9db0cbc1-d370-40dd-90e5-e419c1e5ad38" Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.095521 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bcf8869-ab22-4792-bb7a-72fc6887d091","Type":"ContainerStarted","Data":"09b23e6baeb99e14b844939223de4a2a23b1597f14c82ba0bf7459a310cdea63"} Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.569226 4752 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594b65fc49-fjdpq"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.588052 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.595426 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.751960 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z6f8t"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.783352 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.790293 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vpt5c"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.796056 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7786d8fd7-9wgbn"] Jan 22 10:42:04 crc kubenswrapper[4752]: I0122 10:42:04.802335 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c44b4bf7-jmmzr"] Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.087164 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.116670 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.120418 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.131596 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.175125 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.248038 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86753ef2_9ef7_4cd4_b4bd_f1a2f74f2a52.slice/crio-50da313b5fe2d90ac284aafd0acb71bbfe925125d7399a5390444641f52d9ab0 WatchSource:0}: Error finding container 50da313b5fe2d90ac284aafd0acb71bbfe925125d7399a5390444641f52d9ab0: Status 404 returned error can't find the container with id 50da313b5fe2d90ac284aafd0acb71bbfe925125d7399a5390444641f52d9ab0 Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.256764 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06081432_1ce8_4002_8522_6e3472acb753.slice/crio-256fe42478e1e15df93d529bf8d8d38c0de748496ee12ab032b5529da724d55b WatchSource:0}: Error finding container 256fe42478e1e15df93d529bf8d8d38c0de748496ee12ab032b5529da724d55b: Status 404 returned error can't find the container with id 256fe42478e1e15df93d529bf8d8d38c0de748496ee12ab032b5529da724d55b Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.267228 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12e43fc8_ff00_42df_9091_307d6dc3e7d5.slice/crio-afc34d7ba2fc2ca7e19ab7f7eb907dec7ac671ec1a3392bfc89ed2e68b04222d WatchSource:0}: Error finding container afc34d7ba2fc2ca7e19ab7f7eb907dec7ac671ec1a3392bfc89ed2e68b04222d: Status 404 returned error can't find the container with id 
afc34d7ba2fc2ca7e19ab7f7eb907dec7ac671ec1a3392bfc89ed2e68b04222d Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.270015 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode374cd7c_feeb_428f_8181_2d909a962448.slice/crio-58c342c62dc90574f775a9b7f5e25d7fbcf58d88576bdd502ce4fe4f7a024622 WatchSource:0}: Error finding container 58c342c62dc90574f775a9b7f5e25d7fbcf58d88576bdd502ce4fe4f7a024622: Status 404 returned error can't find the container with id 58c342c62dc90574f775a9b7f5e25d7fbcf58d88576bdd502ce4fe4f7a024622 Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.273412 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76dee6bc_ab39_4f6c_bc31_6ef18020e5f3.slice/crio-5081f3511ff948bf101b381fe55676e7f9c62be76475b25c2ad5a83d09c62151 WatchSource:0}: Error finding container 5081f3511ff948bf101b381fe55676e7f9c62be76475b25c2ad5a83d09c62151: Status 404 returned error can't find the container with id 5081f3511ff948bf101b381fe55676e7f9c62be76475b25c2ad5a83d09c62151 Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.278078 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f2558fc_5258_4a37_b46c_59ebc33503c6.slice/crio-9d6986c6c6d33a384ff4a5b7e972fce3aec54cf63d5cb2b8f08bfc0c9f55723c WatchSource:0}: Error finding container 9d6986c6c6d33a384ff4a5b7e972fce3aec54cf63d5cb2b8f08bfc0c9f55723c: Status 404 returned error can't find the container with id 9d6986c6c6d33a384ff4a5b7e972fce3aec54cf63d5cb2b8f08bfc0c9f55723c Jan 22 10:42:05 crc kubenswrapper[4752]: W0122 10:42:05.285843 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9356406a_3c6e_4af1_a8bb_92244286ba39.slice/crio-d2eef2287c5b84ff6974cfdc64a02d085dbabf948bb083197efb64c7d54d1538 WatchSource:0}: Error finding container d2eef2287c5b84ff6974cfdc64a02d085dbabf948bb083197efb64c7d54d1538: Status 404 returned error can't find the container with id d2eef2287c5b84ff6974cfdc64a02d085dbabf948bb083197efb64c7d54d1538 Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.342976 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:38.102.83.32:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ffhdch578h66h55bh54dh566h9hb4h566h54fh68dh87h675h5b9h55fh559h579hf6h57chbh5d5hb6hbfh5c8h594h8fhch656hddh59h9cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mknt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(0f2558fc-5258-4a37-b46c-59ebc33503c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.346045 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.32:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4lbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9356406a-3c6e-4af1-a8bb-92244286ba39): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.346258 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.32:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 
's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb4bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(76dee6bc-ab39-4f6c-bc31-6ef18020e5f3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.347780 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.347780 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/rabbitmq-notifications-server-0" podUID="76dee6bc-ab39-4f6c-bc31-6ef18020e5f3" Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.352348 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5ffhdch578h66h55bh54dh566h9hb4h566h54fh68dh87h675h5b9h55fh559h579hf6h57chbh5d5hb6hbfh5c8h594h8fhch656hddh59h9cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mknt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(0f2558fc-5258-4a37-b46c-59ebc33503c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 10:42:05 crc kubenswrapper[4752]: E0122 10:42:05.354125 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-sb-0" podUID="0f2558fc-5258-4a37-b46c-59ebc33503c6" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.384927 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.408163 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.532525 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-dns-svc\") pod \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.532798 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x2rl\" (UniqueName: \"kubernetes.io/projected/897f2843-ed58-4f28-9ab1-f8f9e6e50541-kube-api-access-7x2rl\") pod \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.532847 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8bp7\" (UniqueName: \"kubernetes.io/projected/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-kube-api-access-w8bp7\") pod \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.532905 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/897f2843-ed58-4f28-9ab1-f8f9e6e50541-config\") pod \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\" (UID: \"897f2843-ed58-4f28-9ab1-f8f9e6e50541\") " Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.532931 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-config\") pod \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\" (UID: \"9db0cbc1-d370-40dd-90e5-e419c1e5ad38\") " Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.533676 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-config" (OuterVolumeSpecName: "config") pod "9db0cbc1-d370-40dd-90e5-e419c1e5ad38" (UID: "9db0cbc1-d370-40dd-90e5-e419c1e5ad38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.534033 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9db0cbc1-d370-40dd-90e5-e419c1e5ad38" (UID: "9db0cbc1-d370-40dd-90e5-e419c1e5ad38"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.535277 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897f2843-ed58-4f28-9ab1-f8f9e6e50541-config" (OuterVolumeSpecName: "config") pod "897f2843-ed58-4f28-9ab1-f8f9e6e50541" (UID: "897f2843-ed58-4f28-9ab1-f8f9e6e50541"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.540362 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-kube-api-access-w8bp7" (OuterVolumeSpecName: "kube-api-access-w8bp7") pod "9db0cbc1-d370-40dd-90e5-e419c1e5ad38" (UID: "9db0cbc1-d370-40dd-90e5-e419c1e5ad38"). InnerVolumeSpecName "kube-api-access-w8bp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.540290 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897f2843-ed58-4f28-9ab1-f8f9e6e50541-kube-api-access-7x2rl" (OuterVolumeSpecName: "kube-api-access-7x2rl") pod "897f2843-ed58-4f28-9ab1-f8f9e6e50541" (UID: "897f2843-ed58-4f28-9ab1-f8f9e6e50541"). InnerVolumeSpecName "kube-api-access-7x2rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.634421 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.634450 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x2rl\" (UniqueName: \"kubernetes.io/projected/897f2843-ed58-4f28-9ab1-f8f9e6e50541-kube-api-access-7x2rl\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.634460 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8bp7\" (UniqueName: \"kubernetes.io/projected/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-kube-api-access-w8bp7\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.634469 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/897f2843-ed58-4f28-9ab1-f8f9e6e50541-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.634477 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db0cbc1-d370-40dd-90e5-e419c1e5ad38-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:05 crc kubenswrapper[4752]: I0122 10:42:05.924984 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 10:42:06 crc kubenswrapper[4752]: W0122 10:42:06.107517 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3321ba2b_722c_42c7_aa42_0325165b9437.slice/crio-d5887fb8dc7044469084707fff96d00b2dd4d31b8bbf000b41256ba0098148b5 WatchSource:0}: Error finding container d5887fb8dc7044469084707fff96d00b2dd4d31b8bbf000b41256ba0098148b5: Status 404 returned error can't find the container with id d5887fb8dc7044469084707fff96d00b2dd4d31b8bbf000b41256ba0098148b5 Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.134129 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52","Type":"ContainerStarted","Data":"50da313b5fe2d90ac284aafd0acb71bbfe925125d7399a5390444641f52d9ab0"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.140029 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9356406a-3c6e-4af1-a8bb-92244286ba39","Type":"ContainerStarted","Data":"d2eef2287c5b84ff6974cfdc64a02d085dbabf948bb083197efb64c7d54d1538"} Jan 22 10:42:06 crc kubenswrapper[4752]: E0122 10:42:06.142296 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" Jan 22 
10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.144363 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.144673 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b567bfc7-xnx87" event={"ID":"9db0cbc1-d370-40dd-90e5-e419c1e5ad38","Type":"ContainerDied","Data":"1f9425b37896afc44c222b3cec7353fe3cd62d63d3493004d6c3ee63776e83bd"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.158172 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0f2558fc-5258-4a37-b46c-59ebc33503c6","Type":"ContainerStarted","Data":"9d6986c6c6d33a384ff4a5b7e972fce3aec54cf63d5cb2b8f08bfc0c9f55723c"} Jan 22 10:42:06 crc kubenswrapper[4752]: E0122 10:42:06.162572 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="0f2558fc-5258-4a37-b46c-59ebc33503c6" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.163570 4752 generic.go:334] "Generic (PLEG): container finished" podID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerID="77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8" exitCode=0 Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.163647 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" event={"ID":"b07139f0-058d-4338-88f7-7e42a9aebeb6","Type":"ContainerDied","Data":"77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.163669 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" event={"ID":"b07139f0-058d-4338-88f7-7e42a9aebeb6","Type":"ContainerStarted","Data":"cf4d26945648867149d23c0c882b3aa4efc7e429915113929915a7f52c91fd1a"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.172704 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c" event={"ID":"7d696fd8-24f0-4e7a-801b-6376ea06f238","Type":"ContainerStarted","Data":"092b746c921800f1a4cd47e44967c3bb4b3925d8c5e4975f638d42e42c6f452c"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.176394 4752 generic.go:334] "Generic (PLEG): container finished" podID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerID="f6ab28c5287c1658d5348d98db0d0192fbbc971dcd216668870b702dd59dc903" exitCode=0 Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.176624 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" event={"ID":"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83","Type":"ContainerDied","Data":"f6ab28c5287c1658d5348d98db0d0192fbbc971dcd216668870b702dd59dc903"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.176657 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" event={"ID":"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83","Type":"ContainerStarted","Data":"0dcd0844f8232d24dd819820a35109488a890d6921ca594e9da2fc77cd045293"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.187218 4752 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerStarted","Data":"32076b0542997ce8a1a181f41d634c9f5c1a0003244e2a055203c30a9bb377b7"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.189367 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3","Type":"ContainerStarted","Data":"5081f3511ff948bf101b381fe55676e7f9c62be76475b25c2ad5a83d09c62151"} Jan 22 10:42:06 crc kubenswrapper[4752]: E0122 10:42:06.192406 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="76dee6bc-ab39-4f6c-bc31-6ef18020e5f3" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.192978 4752 generic.go:334] "Generic (PLEG): container finished" podID="e374cd7c-feeb-428f-8181-2d909a962448" containerID="1302095026ab6ab4bbe6a30bff942d23f1e1241a9d85e608ecf99c47c232df9c" exitCode=0 Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.193041 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr" event={"ID":"e374cd7c-feeb-428f-8181-2d909a962448","Type":"ContainerDied","Data":"1302095026ab6ab4bbe6a30bff942d23f1e1241a9d85e608ecf99c47c232df9c"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.193069 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr" event={"ID":"e374cd7c-feeb-428f-8181-2d909a962448","Type":"ContainerStarted","Data":"58c342c62dc90574f775a9b7f5e25d7fbcf58d88576bdd502ce4fe4f7a024622"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.203461 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06081432-1ce8-4002-8522-6e3472acb753","Type":"ContainerStarted","Data":"256fe42478e1e15df93d529bf8d8d38c0de748496ee12ab032b5529da724d55b"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.217557 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k" event={"ID":"897f2843-ed58-4f28-9ab1-f8f9e6e50541","Type":"ContainerDied","Data":"92191e692c1f8b68bc557ea65df44fdc60a6d4e1b4cd22c40e1cb4d496ddc4f8"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.217675 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-548dc7894c-bwj9k" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.243327 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6f8t" event={"ID":"12e43fc8-ff00-42df-9091-307d6dc3e7d5","Type":"ContainerStarted","Data":"afc34d7ba2fc2ca7e19ab7f7eb907dec7ac671ec1a3392bfc89ed2e68b04222d"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.266656 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f64ed2cf-3432-43c8-a8cd-236c09d5adb3","Type":"ContainerStarted","Data":"1472725514fbe49f03c29d33f7b9c87e3dc79f1eca8d478c4509f9d8a50f42b6"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.268504 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bcf8869-ab22-4792-bb7a-72fc6887d091","Type":"ContainerStarted","Data":"e2ba05d22e47139ba04d3f78280270d3f71ee4f1993664268e61afbe65282261"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.268659 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.273557 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"37c7ae68-7fae-44ab-bb3a-f838bdc15bea","Type":"ContainerStarted","Data":"560b3410955d2425ad355e6a84f82f9917f0188e7e81c37060502857ee8fc0bc"} Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.303793 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b567bfc7-xnx87"] Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.329051 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77b567bfc7-xnx87"] Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.402570 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548dc7894c-bwj9k"] Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.424887 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-548dc7894c-bwj9k"] Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.433357 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.755361668 podStartE2EDuration="16.433337691s" podCreationTimestamp="2026-01-22 10:41:50 +0000 UTC" firstStartedPulling="2026-01-22 10:42:03.714981452 +0000 UTC m=+1002.944924380" lastFinishedPulling="2026-01-22 10:42:05.392957495 +0000 UTC m=+1004.622900403" observedRunningTime="2026-01-22 10:42:06.383628031 +0000 UTC m=+1005.613570939" watchObservedRunningTime="2026-01-22 10:42:06.433337691 +0000 UTC m=+1005.663280599" Jan 22 10:42:06 crc kubenswrapper[4752]: E0122 10:42:06.670971 4752 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 22 10:42:06 crc kubenswrapper[4752]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 22 10:42:06 crc kubenswrapper[4752]: > podSandboxID="0dcd0844f8232d24dd819820a35109488a890d6921ca594e9da2fc77cd045293" Jan 22 10:42:06 crc kubenswrapper[4752]: E0122 10:42:06.671409 4752 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 10:42:06 crc kubenswrapper[4752]: container 
&Container{Name:dnsmasq-dns,Image:38.102.83.32:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbt2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7786d8fd7-9wgbn_openstack(bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 22 10:42:06 crc kubenswrapper[4752]: > logger="UnhandledError" Jan 22 10:42:06 crc kubenswrapper[4752]: E0122 10:42:06.672898 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.715665 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.871179 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql75k\" (UniqueName: \"kubernetes.io/projected/e374cd7c-feeb-428f-8181-2d909a962448-kube-api-access-ql75k\") pod \"e374cd7c-feeb-428f-8181-2d909a962448\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.871253 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-dns-svc\") pod \"e374cd7c-feeb-428f-8181-2d909a962448\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.871341 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-config\") pod \"e374cd7c-feeb-428f-8181-2d909a962448\" (UID: \"e374cd7c-feeb-428f-8181-2d909a962448\") " Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.876311 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e374cd7c-feeb-428f-8181-2d909a962448-kube-api-access-ql75k" (OuterVolumeSpecName: "kube-api-access-ql75k") pod "e374cd7c-feeb-428f-8181-2d909a962448" (UID: "e374cd7c-feeb-428f-8181-2d909a962448"). InnerVolumeSpecName "kube-api-access-ql75k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.891715 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e374cd7c-feeb-428f-8181-2d909a962448" (UID: "e374cd7c-feeb-428f-8181-2d909a962448"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.906139 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-config" (OuterVolumeSpecName: "config") pod "e374cd7c-feeb-428f-8181-2d909a962448" (UID: "e374cd7c-feeb-428f-8181-2d909a962448"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.972762 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql75k\" (UniqueName: \"kubernetes.io/projected/e374cd7c-feeb-428f-8181-2d909a962448-kube-api-access-ql75k\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.972809 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:06 crc kubenswrapper[4752]: I0122 10:42:06.972820 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e374cd7c-feeb-428f-8181-2d909a962448-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.116832 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897f2843-ed58-4f28-9ab1-f8f9e6e50541" path="/var/lib/kubelet/pods/897f2843-ed58-4f28-9ab1-f8f9e6e50541/volumes" Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.117191 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db0cbc1-d370-40dd-90e5-e419c1e5ad38" path="/var/lib/kubelet/pods/9db0cbc1-d370-40dd-90e5-e419c1e5ad38/volumes" Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.292839 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr" event={"ID":"e374cd7c-feeb-428f-8181-2d909a962448","Type":"ContainerDied","Data":"58c342c62dc90574f775a9b7f5e25d7fbcf58d88576bdd502ce4fe4f7a024622"} Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.292897 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c44b4bf7-jmmzr" Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.292932 4752 scope.go:117] "RemoveContainer" containerID="1302095026ab6ab4bbe6a30bff942d23f1e1241a9d85e608ecf99c47c232df9c" Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.294415 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3321ba2b-722c-42c7-aa42-0325165b9437","Type":"ContainerStarted","Data":"d5887fb8dc7044469084707fff96d00b2dd4d31b8bbf000b41256ba0098148b5"} Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.298497 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" event={"ID":"b07139f0-058d-4338-88f7-7e42a9aebeb6","Type":"ContainerStarted","Data":"66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb"} Jan 22 10:42:07 crc kubenswrapper[4752]: E0122 10:42:07.305092 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="0f2558fc-5258-4a37-b46c-59ebc33503c6" Jan 22 10:42:07 crc kubenswrapper[4752]: E0122 10:42:07.305191 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.32:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" Jan 22 10:42:07 crc kubenswrapper[4752]: E0122 10:42:07.305232 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="76dee6bc-ab39-4f6c-bc31-6ef18020e5f3" Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.364793 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c44b4bf7-jmmzr"] Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.384204 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c44b4bf7-jmmzr"] Jan 22 10:42:07 crc kubenswrapper[4752]: I0122 10:42:07.399936 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" podStartSLOduration=21.255036787 podStartE2EDuration="21.399916863s" podCreationTimestamp="2026-01-22 10:41:46 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.243108268 +0000 UTC m=+1004.473051176" lastFinishedPulling="2026-01-22 10:42:05.387988344 +0000 UTC m=+1004.617931252" observedRunningTime="2026-01-22 10:42:07.385530484 +0000 UTC m=+1006.615473412" watchObservedRunningTime="2026-01-22 10:42:07.399916863 +0000 UTC m=+1006.629859771" Jan 22 10:42:08 crc kubenswrapper[4752]: I0122 10:42:08.309720 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" event={"ID":"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83","Type":"ContainerStarted","Data":"bf2aabaab4385ac9c86e70574418e5fc3f38575b717933c0f4bd25da3c0fe098"} Jan 22 10:42:08 crc kubenswrapper[4752]: I0122 10:42:08.310095 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:42:08 crc kubenswrapper[4752]: I0122 10:42:08.310302 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:42:08 crc kubenswrapper[4752]: I0122 10:42:08.339587 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" podStartSLOduration=22.191152166 podStartE2EDuration="22.339565356s" podCreationTimestamp="2026-01-22 10:41:46 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.245786108 +0000 UTC m=+1004.475729016" lastFinishedPulling="2026-01-22 10:42:05.394199298 +0000 UTC m=+1004.624142206" observedRunningTime="2026-01-22 10:42:08.332328996 +0000 UTC m=+1007.562271904" watchObservedRunningTime="2026-01-22 10:42:08.339565356 +0000 UTC m=+1007.569508274" Jan 22 10:42:09 crc kubenswrapper[4752]: I0122 10:42:09.107806 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e374cd7c-feeb-428f-8181-2d909a962448" path="/var/lib/kubelet/pods/e374cd7c-feeb-428f-8181-2d909a962448/volumes" Jan 22 10:42:11 crc kubenswrapper[4752]: I0122 10:42:11.325603 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 10:42:12 crc kubenswrapper[4752]: I0122 10:42:12.159174 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" Jan 22 10:42:12 crc kubenswrapper[4752]: I0122 10:42:12.236907 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7786d8fd7-9wgbn"] Jan 22 10:42:12 crc kubenswrapper[4752]: I0122 10:42:12.237793 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerName="dnsmasq-dns" containerID="cri-o://bf2aabaab4385ac9c86e70574418e5fc3f38575b717933c0f4bd25da3c0fe098" gracePeriod=10 Jan 22 10:42:12 crc kubenswrapper[4752]: I0122 10:42:12.239400 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.354804 4752 generic.go:334] "Generic (PLEG): container finished" podID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerID="bf2aabaab4385ac9c86e70574418e5fc3f38575b717933c0f4bd25da3c0fe098" exitCode=0 Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.356039 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" event={"ID":"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83","Type":"ContainerDied","Data":"bf2aabaab4385ac9c86e70574418e5fc3f38575b717933c0f4bd25da3c0fe098"} Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.508890 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f9bf46d5-7xzb6"] Jan 22 10:42:13 crc kubenswrapper[4752]: E0122 10:42:13.509482 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e374cd7c-feeb-428f-8181-2d909a962448" containerName="init" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.509498 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e374cd7c-feeb-428f-8181-2d909a962448" containerName="init" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.509631 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e374cd7c-feeb-428f-8181-2d909a962448" containerName="init" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.512342 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.528081 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f9bf46d5-7xzb6"] Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.615832 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-dns-svc\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.615983 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj852\" (UniqueName: \"kubernetes.io/projected/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-kube-api-access-pj852\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.616063 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.717996 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-dns-svc\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.718062 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj852\" (UniqueName: \"kubernetes.io/projected/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-kube-api-access-pj852\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.718093 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.720182 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.720809 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-dns-svc\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.753293 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj852\" (UniqueName: 
\"kubernetes.io/projected/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-kube-api-access-pj852\") pod \"dnsmasq-dns-6f9bf46d5-7xzb6\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") " pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.822703 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.832160 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.922051 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbt2t\" (UniqueName: \"kubernetes.io/projected/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-kube-api-access-pbt2t\") pod \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.922726 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-dns-svc\") pod \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.923144 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-config\") pod \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\" (UID: \"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83\") " Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.938971 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-kube-api-access-pbt2t" (OuterVolumeSpecName: "kube-api-access-pbt2t") pod "bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" (UID: "bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83"). InnerVolumeSpecName "kube-api-access-pbt2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.984437 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-config" (OuterVolumeSpecName: "config") pod "bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" (UID: "bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:13 crc kubenswrapper[4752]: I0122 10:42:13.987597 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" (UID: "bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.025670 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbt2t\" (UniqueName: \"kubernetes.io/projected/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-kube-api-access-pbt2t\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.025702 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.025713 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.366144 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" event={"ID":"bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83","Type":"ContainerDied","Data":"0dcd0844f8232d24dd819820a35109488a890d6921ca594e9da2fc77cd045293"} Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.366496 4752 scope.go:117] "RemoveContainer" containerID="bf2aabaab4385ac9c86e70574418e5fc3f38575b717933c0f4bd25da3c0fe098" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.366622 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7786d8fd7-9wgbn" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.400439 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7786d8fd7-9wgbn"] Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.405613 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7786d8fd7-9wgbn"] Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.611828 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 22 10:42:14 crc kubenswrapper[4752]: E0122 10:42:14.612138 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerName="init" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.612155 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerName="init" Jan 22 10:42:14 crc kubenswrapper[4752]: E0122 10:42:14.612188 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerName="dnsmasq-dns" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.612194 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerName="dnsmasq-dns" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.612328 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" containerName="dnsmasq-dns" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.636587 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.639039 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.639594 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ss2m2" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.640300 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.640965 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.649312 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.740177 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.740478 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7b96c9ad-917b-4891-a39f-3f19c92bdd30-cache\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.740608 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b96c9ad-917b-4891-a39f-3f19c92bdd30-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.740700 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2fr\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-kube-api-access-tk2fr\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.740797 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.740893 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7b96c9ad-917b-4891-a39f-3f19c92bdd30-lock\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842265 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842308 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7b96c9ad-917b-4891-a39f-3f19c92bdd30-lock\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842394 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842410 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7b96c9ad-917b-4891-a39f-3f19c92bdd30-cache\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842441 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b96c9ad-917b-4891-a39f-3f19c92bdd30-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842463 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2fr\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-kube-api-access-tk2fr\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.842575 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: E0122 10:42:14.842795 4752 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 10:42:14 crc kubenswrapper[4752]: E0122 10:42:14.842887 4752 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 10:42:14 crc kubenswrapper[4752]: E0122 10:42:14.843010 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift podName:7b96c9ad-917b-4891-a39f-3f19c92bdd30 nodeName:}" failed. No retries permitted until 2026-01-22 10:42:15.342990163 +0000 UTC m=+1014.572933071 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift") pod "swift-storage-0" (UID: "7b96c9ad-917b-4891-a39f-3f19c92bdd30") : configmap "swift-ring-files" not found Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.843112 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7b96c9ad-917b-4891-a39f-3f19c92bdd30-lock\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.843150 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7b96c9ad-917b-4891-a39f-3f19c92bdd30-cache\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.850633 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b96c9ad-917b-4891-a39f-3f19c92bdd30-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.876644 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2fr\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-kube-api-access-tk2fr\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.886191 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.915392 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rkqqc"] Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.916417 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.920887 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.921105 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.929108 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.950155 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rkqqc"] Jan 22 10:42:14 crc kubenswrapper[4752]: E0122 10:42:14.950705 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dnc4h ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dnc4h ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-rkqqc" podUID="a218a304-0d0f-47d8-a947-395358ada5fd" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.966128 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nrt6p"] Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.967273 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:14 crc kubenswrapper[4752]: I0122 10:42:14.984235 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nrt6p"] Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.000226 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rkqqc"] Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045145 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-ring-data-devices\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045218 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/61a33a59-edcf-44fe-97c6-2a397c69c87a-etc-swift\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045252 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-swiftconf\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045273 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnc4h\" (UniqueName: \"kubernetes.io/projected/a218a304-0d0f-47d8-a947-395358ada5fd-kube-api-access-dnc4h\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc 
kubenswrapper[4752]: I0122 10:42:15.045336 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-dispersionconf\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045414 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkph\" (UniqueName: \"kubernetes.io/projected/61a33a59-edcf-44fe-97c6-2a397c69c87a-kube-api-access-4rkph\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045470 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-swiftconf\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045493 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-scripts\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045531 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-dispersionconf\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045552 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-combined-ca-bundle\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045580 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-ring-data-devices\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045609 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-scripts\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045635 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-combined-ca-bundle\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") 
" pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.045660 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a218a304-0d0f-47d8-a947-395358ada5fd-etc-swift\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.046253 4752 scope.go:117] "RemoveContainer" containerID="f6ab28c5287c1658d5348d98db0d0192fbbc971dcd216668870b702dd59dc903" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.117526 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83" path="/var/lib/kubelet/pods/bf70bf27-0f6e-4e08-8ae1-1b9ad4680f83/volumes" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147546 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-swiftconf\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147595 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-scripts\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147645 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-dispersionconf\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147669 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-combined-ca-bundle\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147700 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-ring-data-devices\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147730 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-scripts\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147753 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-combined-ca-bundle\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:15 crc 
kubenswrapper[4752]: I0122 10:42:15.147777 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a218a304-0d0f-47d8-a947-395358ada5fd-etc-swift\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147838 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-ring-data-devices\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147893 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/61a33a59-edcf-44fe-97c6-2a397c69c87a-etc-swift\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147919 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-swiftconf\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147944 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnc4h\" (UniqueName: \"kubernetes.io/projected/a218a304-0d0f-47d8-a947-395358ada5fd-kube-api-access-dnc4h\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.147995 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-dispersionconf\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.148035 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rkph\" (UniqueName: \"kubernetes.io/projected/61a33a59-edcf-44fe-97c6-2a397c69c87a-kube-api-access-4rkph\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.148571 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-scripts\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.148715 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-ring-data-devices\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.148711 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/61a33a59-edcf-44fe-97c6-2a397c69c87a-etc-swift\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.149115 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a218a304-0d0f-47d8-a947-395358ada5fd-etc-swift\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.149256 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-ring-data-devices\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.149523 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-scripts\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.151801 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-dispersionconf\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.152229 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-combined-ca-bundle\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.152411 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-swiftconf\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.152888 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-dispersionconf\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.153006 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-combined-ca-bundle\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.160966 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-swiftconf\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.163829 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rkph\" (UniqueName: \"kubernetes.io/projected/61a33a59-edcf-44fe-97c6-2a397c69c87a-kube-api-access-4rkph\") pod \"swift-ring-rebalance-nrt6p\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.170400 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnc4h\" (UniqueName: \"kubernetes.io/projected/a218a304-0d0f-47d8-a947-395358ada5fd-kube-api-access-dnc4h\") pod \"swift-ring-rebalance-rkqqc\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") " pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.282849 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nrt6p"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.352097 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:15 crc kubenswrapper[4752]: E0122 10:42:15.352274 4752 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 22 10:42:15 crc kubenswrapper[4752]: E0122 10:42:15.352303 4752 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 22 10:42:15 crc kubenswrapper[4752]: E0122 10:42:15.352367 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift podName:7b96c9ad-917b-4891-a39f-3f19c92bdd30 nodeName:}" failed. No retries permitted until 2026-01-22 10:42:16.352347601 +0000 UTC m=+1015.582290509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift") pod "swift-storage-0" (UID: "7b96c9ad-917b-4891-a39f-3f19c92bdd30") : configmap "swift-ring-files" not found
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.376722 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.389909 4752 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.453758 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-combined-ca-bundle\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.453811 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a218a304-0d0f-47d8-a947-395358ada5fd-etc-swift\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.453886 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-dispersionconf\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.453954 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-swiftconf\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.454019 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnc4h\" (UniqueName: \"kubernetes.io/projected/a218a304-0d0f-47d8-a947-395358ada5fd-kube-api-access-dnc4h\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.454057 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-scripts\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.454081 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-ring-data-devices\") pod \"a218a304-0d0f-47d8-a947-395358ada5fd\" (UID: \"a218a304-0d0f-47d8-a947-395358ada5fd\") "
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.454770 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.454931 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-scripts" (OuterVolumeSpecName: "scripts") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.455013 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a218a304-0d0f-47d8-a947-395358ada5fd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.460070 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.460103 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.460143 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a218a304-0d0f-47d8-a947-395358ada5fd-kube-api-access-dnc4h" (OuterVolumeSpecName: "kube-api-access-dnc4h") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "kube-api-access-dnc4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.460192 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a218a304-0d0f-47d8-a947-395358ada5fd" (UID: "a218a304-0d0f-47d8-a947-395358ada5fd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555846 4752 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555891 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnc4h\" (UniqueName: \"kubernetes.io/projected/a218a304-0d0f-47d8-a947-395358ada5fd-kube-api-access-dnc4h\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555917 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555927 4752 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a218a304-0d0f-47d8-a947-395358ada5fd-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555936 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555944 4752 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a218a304-0d0f-47d8-a947-395358ada5fd-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:15 crc kubenswrapper[4752]: I0122 10:42:15.555952 4752 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a218a304-0d0f-47d8-a947-395358ada5fd-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:16 crc kubenswrapper[4752]: I0122 10:42:16.366151 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f9bf46d5-7xzb6"]
Jan 22 10:42:16 crc kubenswrapper[4752]: I0122 10:42:16.378555 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:16 crc kubenswrapper[4752]: E0122 10:42:16.378756 4752 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 22 10:42:16 crc kubenswrapper[4752]: E0122 10:42:16.378778 4752 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 22 10:42:16 crc kubenswrapper[4752]: E0122 10:42:16.378847 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift podName:7b96c9ad-917b-4891-a39f-3f19c92bdd30 nodeName:}" failed. No retries permitted until 2026-01-22 10:42:18.378824762 +0000 UTC m=+1017.608767670 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift") pod "swift-storage-0" (UID: "7b96c9ad-917b-4891-a39f-3f19c92bdd30") : configmap "swift-ring-files" not found
Jan 22 10:42:16 crc kubenswrapper[4752]: I0122 10:42:16.390372 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rkqqc"
Jan 22 10:42:16 crc kubenswrapper[4752]: I0122 10:42:16.475072 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rkqqc"]
Jan 22 10:42:16 crc kubenswrapper[4752]: W0122 10:42:16.478572 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f26f240_ac69_44d2_9e8a_c119b4a9b8f4.slice/crio-4ec333b3999a4d9f624fbf0b2874eed9e12682a1dc82e403f83375c1e5961d06 WatchSource:0}: Error finding container 4ec333b3999a4d9f624fbf0b2874eed9e12682a1dc82e403f83375c1e5961d06: Status 404 returned error can't find the container with id 4ec333b3999a4d9f624fbf0b2874eed9e12682a1dc82e403f83375c1e5961d06
Jan 22 10:42:16 crc kubenswrapper[4752]: I0122 10:42:16.481192 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-rkqqc"]
Jan 22 10:42:17 crc kubenswrapper[4752]: I0122 10:42:17.111648 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a218a304-0d0f-47d8-a947-395358ada5fd" path="/var/lib/kubelet/pods/a218a304-0d0f-47d8-a947-395358ada5fd/volumes"
Jan 22 10:42:17 crc kubenswrapper[4752]: I0122 10:42:17.114773 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nrt6p"]
Jan 22 10:42:17 crc kubenswrapper[4752]: W0122 10:42:17.163742 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61a33a59_edcf_44fe_97c6_2a397c69c87a.slice/crio-103a8a9614f957cec9549467b949e6fd556471fb3dbd5c6684509aa93453ac1c WatchSource:0}: Error finding container 103a8a9614f957cec9549467b949e6fd556471fb3dbd5c6684509aa93453ac1c: Status 404 returned error can't find the container with id 103a8a9614f957cec9549467b949e6fd556471fb3dbd5c6684509aa93453ac1c
Jan 22 10:42:17 crc kubenswrapper[4752]: I0122 10:42:17.401776 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" event={"ID":"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4","Type":"ContainerStarted","Data":"4ec333b3999a4d9f624fbf0b2874eed9e12682a1dc82e403f83375c1e5961d06"}
Jan 22 10:42:17 crc kubenswrapper[4752]: I0122 10:42:17.403620 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nrt6p" event={"ID":"61a33a59-edcf-44fe-97c6-2a397c69c87a","Type":"ContainerStarted","Data":"103a8a9614f957cec9549467b949e6fd556471fb3dbd5c6684509aa93453ac1c"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.412752 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:18 crc kubenswrapper[4752]: E0122 10:42:18.412962 4752 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 22 10:42:18 crc kubenswrapper[4752]: E0122 10:42:18.413151 4752 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 22 10:42:18 crc kubenswrapper[4752]: E0122 10:42:18.413192 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift podName:7b96c9ad-917b-4891-a39f-3f19c92bdd30 nodeName:}" failed. No retries permitted until 2026-01-22 10:42:22.413179201 +0000 UTC m=+1021.643122109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift") pod "swift-storage-0" (UID: "7b96c9ad-917b-4891-a39f-3f19c92bdd30") : configmap "swift-ring-files" not found
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.417968 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f64ed2cf-3432-43c8-a8cd-236c09d5adb3","Type":"ContainerStarted","Data":"8931603de063fe8a4149f608813e0cbf9e28671d7443e511a9928898453deafd"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.421546 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06081432-1ce8-4002-8522-6e3472acb753","Type":"ContainerStarted","Data":"4dfe308993776e72d58d634a0bb55ceaabe0a06e5306509cc03d76ba3c3e734c"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.423329 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c" event={"ID":"7d696fd8-24f0-4e7a-801b-6376ea06f238","Type":"ContainerStarted","Data":"985e6dd4f3bdbb371cd8622d9093682393118452ccaff03f71d26a2214f0fd57"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.423907 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vpt5c"
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.426885 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"37c7ae68-7fae-44ab-bb3a-f838bdc15bea","Type":"ContainerStarted","Data":"32669c604b914124005a7643850c663237813186676b523d6c7223541b4d984b"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.426963 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.433479 4752 generic.go:334] "Generic (PLEG): container finished" podID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerID="39b5775e187bd7f154652a27cbdabd622f9153607541a05ef33604b10fe848a9" exitCode=0
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.433523 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" event={"ID":"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4","Type":"ContainerDied","Data":"39b5775e187bd7f154652a27cbdabd622f9153607541a05ef33604b10fe848a9"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.445003 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6f8t" event={"ID":"12e43fc8-ff00-42df-9091-307d6dc3e7d5","Type":"ContainerStarted","Data":"87b6c238f84f2d0115cb0a94fb4174f3b1314dd8f5fcf8ff4be9471e261f5d55"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.450116 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52","Type":"ContainerStarted","Data":"b552ddb1cbe37b0846222181a70e2b5fabda4c2e97937ec8510628d7daab0110"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.454793 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3321ba2b-722c-42c7-aa42-0325165b9437","Type":"ContainerStarted","Data":"260bc72bcc8a9716ee4e116188629b432585d51f3b94ee7ac5859eefc4b80f70"}
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.463440 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vpt5c" podStartSLOduration=11.829386166 podStartE2EDuration="22.463419705s" podCreationTimestamp="2026-01-22 10:41:56 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.270086258 +0000 UTC m=+1004.500029196" lastFinishedPulling="2026-01-22 10:42:15.904119827 +0000 UTC m=+1015.134062735" observedRunningTime="2026-01-22 10:42:18.459245325 +0000 UTC m=+1017.689188233" watchObservedRunningTime="2026-01-22 10:42:18.463419705 +0000 UTC m=+1017.693362613"
Jan 22 10:42:18 crc kubenswrapper[4752]: I0122 10:42:18.499535 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.133029293 podStartE2EDuration="26.499517676s" podCreationTimestamp="2026-01-22 10:41:52 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.263916986 +0000 UTC m=+1004.493859894" lastFinishedPulling="2026-01-22 10:42:16.630405369 +0000 UTC m=+1015.860348277" observedRunningTime="2026-01-22 10:42:18.49512146 +0000 UTC m=+1017.725064368" watchObservedRunningTime="2026-01-22 10:42:18.499517676 +0000 UTC m=+1017.729460584"
Jan 22 10:42:19 crc kubenswrapper[4752]: I0122 10:42:19.481011 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerStarted","Data":"2b5c2d1fca13358092182d00f7908299fdf8e03ff7a5c1c1f7fca3e7d84a462a"}
Jan 22 10:42:19 crc kubenswrapper[4752]: I0122 10:42:19.482696 4752 generic.go:334] "Generic (PLEG): container finished" podID="12e43fc8-ff00-42df-9091-307d6dc3e7d5" containerID="87b6c238f84f2d0115cb0a94fb4174f3b1314dd8f5fcf8ff4be9471e261f5d55" exitCode=0
Jan 22 10:42:19 crc kubenswrapper[4752]: I0122 10:42:19.482778 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6f8t" event={"ID":"12e43fc8-ff00-42df-9091-307d6dc3e7d5","Type":"ContainerDied","Data":"87b6c238f84f2d0115cb0a94fb4174f3b1314dd8f5fcf8ff4be9471e261f5d55"}
Jan 22 10:42:22 crc kubenswrapper[4752]: I0122 10:42:22.499872 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:22 crc kubenswrapper[4752]: E0122 10:42:22.500783 4752 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 22 10:42:22 crc kubenswrapper[4752]: E0122 10:42:22.500808 4752 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 22 10:42:22 crc kubenswrapper[4752]: E0122 10:42:22.500902 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift podName:7b96c9ad-917b-4891-a39f-3f19c92bdd30 nodeName:}" failed. No retries permitted until 2026-01-22 10:42:30.500851582 +0000 UTC m=+1029.730794520 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift") pod "swift-storage-0" (UID: "7b96c9ad-917b-4891-a39f-3f19c92bdd30") : configmap "swift-ring-files" not found
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.541004 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3","Type":"ContainerStarted","Data":"4c31a2ea966d9d658601982b507b6cfee498c25edc562fe2047f227515e40f2b"}
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.544215 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" event={"ID":"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4","Type":"ContainerStarted","Data":"270612d024ba19436c3e4cc9709171bb8a6aea8c01e882181ca33201cb66c990"}
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.544380 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6"
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.547895 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6f8t" event={"ID":"12e43fc8-ff00-42df-9091-307d6dc3e7d5","Type":"ContainerStarted","Data":"6e7f6cc13ae2c82fba341587a81338383f5c2f35e2cd7837a9653c161928fac1"}
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.549173 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nrt6p" event={"ID":"61a33a59-edcf-44fe-97c6-2a397c69c87a","Type":"ContainerStarted","Data":"760d499232cdc65e422029a81e2e22b260c74f188b94d672bce3e61f92fe6156"}
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.615423 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" podStartSLOduration=10.615407232 podStartE2EDuration="10.615407232s" podCreationTimestamp="2026-01-22 10:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:23.610618226 +0000 UTC m=+1022.840561164" watchObservedRunningTime="2026-01-22 10:42:23.615407232 +0000 UTC m=+1022.845350140"
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.624057 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 22 10:42:23 crc kubenswrapper[4752]: I0122 10:42:23.656577 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nrt6p" podStartSLOduration=6.708932277 podStartE2EDuration="9.656558826s" podCreationTimestamp="2026-01-22 10:42:14 +0000 UTC" firstStartedPulling="2026-01-22 10:42:17.166920551 +0000 UTC m=+1016.396863459" lastFinishedPulling="2026-01-22 10:42:20.1145471 +0000 UTC m=+1019.344490008" observedRunningTime="2026-01-22 10:42:23.633447697 +0000 UTC m=+1022.863390615" watchObservedRunningTime="2026-01-22 10:42:23.656558826 +0000 UTC m=+1022.886501744"
Jan 22 10:42:25 crc kubenswrapper[4752]: I0122 10:42:25.568846 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9356406a-3c6e-4af1-a8bb-92244286ba39","Type":"ContainerStarted","Data":"142644851c495509c91062f475336d914e85547dba46c81d6a1978054f164e2a"}
Jan 22 10:42:27 crc kubenswrapper[4752]: I0122 10:42:27.726194 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:42:27 crc kubenswrapper[4752]: I0122 10:42:27.726847 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.603741 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6f8t" event={"ID":"12e43fc8-ff00-42df-9091-307d6dc3e7d5","Type":"ContainerStarted","Data":"4c55ab04c569e1dd1ced4772c69d5bf2d76e7fe6e0db102b028328d12babff21"}
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.618610 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0f2558fc-5258-4a37-b46c-59ebc33503c6","Type":"ContainerStarted","Data":"ed998da6fe802d83b767b33f289c7692b54138d88504c5897474a17ae0e48eb6"}
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.618652 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0f2558fc-5258-4a37-b46c-59ebc33503c6","Type":"ContainerStarted","Data":"e6c6c684a314328c4229dcc9d3440299934d05f597c5a43d63926628d7bbdb4d"}
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.624989 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3321ba2b-722c-42c7-aa42-0325165b9437","Type":"ContainerStarted","Data":"8949a30e97b40b21774b8fc3fd42ad177da7c5e729a28f95f8fe53f084c0d274"}
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.627098 4752 generic.go:334] "Generic (PLEG): container finished" podID="f64ed2cf-3432-43c8-a8cd-236c09d5adb3" containerID="8931603de063fe8a4149f608813e0cbf9e28671d7443e511a9928898453deafd" exitCode=0
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.627166 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f64ed2cf-3432-43c8-a8cd-236c09d5adb3","Type":"ContainerDied","Data":"8931603de063fe8a4149f608813e0cbf9e28671d7443e511a9928898453deafd"}
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.629176 4752 generic.go:334] "Generic (PLEG): container finished" podID="06081432-1ce8-4002-8522-6e3472acb753" containerID="4dfe308993776e72d58d634a0bb55ceaabe0a06e5306509cc03d76ba3c3e734c" exitCode=0
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.629214 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06081432-1ce8-4002-8522-6e3472acb753","Type":"ContainerDied","Data":"4dfe308993776e72d58d634a0bb55ceaabe0a06e5306509cc03d76ba3c3e734c"}
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.642423 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-z6f8t" podStartSLOduration=24.153514207 podStartE2EDuration="32.642396216s" podCreationTimestamp="2026-01-22 10:41:56 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.270080558 +0000 UTC m=+1004.500023476" lastFinishedPulling="2026-01-22 10:42:13.758962577 +0000 UTC m=+1012.988905485" observedRunningTime="2026-01-22 10:42:28.640234909 +0000 UTC m=+1027.870177847" watchObservedRunningTime="2026-01-22 10:42:28.642396216 +0000 UTC m=+1027.872339164"
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.792044 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.042498645 podStartE2EDuration="29.792026327s" podCreationTimestamp="2026-01-22 10:41:59 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.342774853 +0000 UTC m=+1004.572717761" lastFinishedPulling="2026-01-22 10:42:28.092302535 +0000 UTC m=+1027.322245443" observedRunningTime="2026-01-22 10:42:28.783438031 +0000 UTC m=+1028.013380939" watchObservedRunningTime="2026-01-22 10:42:28.792026327 +0000 UTC m=+1028.021969235"
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.809208 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.234587007 podStartE2EDuration="32.809187369s" podCreationTimestamp="2026-01-22 10:41:56 +0000 UTC" firstStartedPulling="2026-01-22 10:42:06.12280062 +0000 UTC m=+1005.352743528" lastFinishedPulling="2026-01-22 10:42:27.697400982 +0000 UTC m=+1026.927343890" observedRunningTime="2026-01-22 10:42:28.80313618 +0000 UTC m=+1028.033079088" watchObservedRunningTime="2026-01-22 10:42:28.809187369 +0000 UTC m=+1028.039130277"
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.837041 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6"
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.891765 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594b65fc49-fjdpq"]
Jan 22 10:42:28 crc kubenswrapper[4752]: I0122 10:42:28.892005 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerName="dnsmasq-dns" containerID="cri-o://66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb" gracePeriod=10
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.291508 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.368588 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-config\") pod \"b07139f0-058d-4338-88f7-7e42a9aebeb6\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") "
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.368665 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-dns-svc\") pod \"b07139f0-058d-4338-88f7-7e42a9aebeb6\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") "
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.368793 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvwbc\" (UniqueName: \"kubernetes.io/projected/b07139f0-058d-4338-88f7-7e42a9aebeb6-kube-api-access-jvwbc\") pod \"b07139f0-058d-4338-88f7-7e42a9aebeb6\" (UID: \"b07139f0-058d-4338-88f7-7e42a9aebeb6\") "
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.373480 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07139f0-058d-4338-88f7-7e42a9aebeb6-kube-api-access-jvwbc" (OuterVolumeSpecName: "kube-api-access-jvwbc") pod "b07139f0-058d-4338-88f7-7e42a9aebeb6" (UID: "b07139f0-058d-4338-88f7-7e42a9aebeb6").
InnerVolumeSpecName "kube-api-access-jvwbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.406054 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-config" (OuterVolumeSpecName: "config") pod "b07139f0-058d-4338-88f7-7e42a9aebeb6" (UID: "b07139f0-058d-4338-88f7-7e42a9aebeb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.421896 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b07139f0-058d-4338-88f7-7e42a9aebeb6" (UID: "b07139f0-058d-4338-88f7-7e42a9aebeb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.470387 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvwbc\" (UniqueName: \"kubernetes.io/projected/b07139f0-058d-4338-88f7-7e42a9aebeb6-kube-api-access-jvwbc\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.470435 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-config\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.470450 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07139f0-058d-4338-88f7-7e42a9aebeb6-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.639570 4752 generic.go:334] "Generic (PLEG): container finished" podID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerID="66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb" exitCode=0
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.639655 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" event={"ID":"b07139f0-058d-4338-88f7-7e42a9aebeb6","Type":"ContainerDied","Data":"66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb"}
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.639690 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.639715 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594b65fc49-fjdpq" event={"ID":"b07139f0-058d-4338-88f7-7e42a9aebeb6","Type":"ContainerDied","Data":"cf4d26945648867149d23c0c882b3aa4efc7e429915113929915a7f52c91fd1a"}
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.639741 4752 scope.go:117] "RemoveContainer" containerID="66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.642195 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"06081432-1ce8-4002-8522-6e3472acb753","Type":"ContainerStarted","Data":"5818898aeb6143272e657044ebfbdbe2756e4ad51104f860968dc9e0a5f3abe6"}
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.644107 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f64ed2cf-3432-43c8-a8cd-236c09d5adb3","Type":"ContainerStarted","Data":"1a0f9377835f5d39746aa0af062c33569fb3fe3a4a8691e71cc155bcca3af069"}
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.644684 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z6f8t"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.644726 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z6f8t"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.684554 4752 scope.go:117] "RemoveContainer" containerID="77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.696658 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=33.086052772 podStartE2EDuration="41.696640957s" podCreationTimestamp="2026-01-22 10:41:48 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.245818009 +0000 UTC m=+1004.475760917" lastFinishedPulling="2026-01-22 10:42:13.856406194 +0000 UTC m=+1013.086349102" observedRunningTime="2026-01-22 10:42:29.693282129 +0000 UTC m=+1028.923225037" watchObservedRunningTime="2026-01-22 10:42:29.696640957 +0000 UTC m=+1028.926583865"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.699468 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.910895176 podStartE2EDuration="40.699457552s" podCreationTimestamp="2026-01-22 10:41:49 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.260081275 +0000 UTC m=+1004.490024183" lastFinishedPulling="2026-01-22 10:42:15.048643641 +0000 UTC m=+1014.278586559" observedRunningTime="2026-01-22 10:42:29.67320401 +0000 UTC m=+1028.903146918" watchObservedRunningTime="2026-01-22 10:42:29.699457552 +0000 UTC m=+1028.929400460"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.702988 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.703142 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.711284 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594b65fc49-fjdpq"]
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.713526 4752 scope.go:117] "RemoveContainer" containerID="66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb"
Jan 22 10:42:29 crc kubenswrapper[4752]: E0122 10:42:29.716277 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb\": container with ID starting with 66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb not found: ID does not exist" containerID="66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.716326 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb"} err="failed to get container status \"66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb\": rpc error: code = NotFound desc = could not find container \"66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb\": container with ID starting with 66f7de85fc6f57e1f62c974c7245937ed7202eaa35add55eb0dab5d14f726cfb not found: ID does not exist"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.716361 4752 scope.go:117] "RemoveContainer" containerID="77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8"
Jan 22 10:42:29 crc kubenswrapper[4752]: E0122 10:42:29.716680 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8\": container with ID starting with 77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8 not found: ID does not exist" containerID="77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.716723 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8"} err="failed to get container status \"77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8\": rpc error: code = NotFound desc = could not find container \"77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8\": container with ID starting with 77612c7262246be992ae822a90208e0010be05dc6f651f17c9a7eb55d3728cb8 not found: ID does not exist"
Jan 22 10:42:29 crc kubenswrapper[4752]: I0122 10:42:29.718324 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594b65fc49-fjdpq"]
Jan 22 10:42:30 crc kubenswrapper[4752]: I0122 10:42:30.588641 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:30 crc kubenswrapper[4752]: E0122 10:42:30.588850 4752 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 22 10:42:30 crc kubenswrapper[4752]: E0122 10:42:30.589171 4752 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 22 10:42:30 crc kubenswrapper[4752]: E0122 10:42:30.589251 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift podName:7b96c9ad-917b-4891-a39f-3f19c92bdd30 nodeName:}" failed.
No retries permitted until 2026-01-22 10:42:46.589226491 +0000 UTC m=+1045.819169439 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift") pod "swift-storage-0" (UID: "7b96c9ad-917b-4891-a39f-3f19c92bdd30") : configmap "swift-ring-files" not found
Jan 22 10:42:30 crc kubenswrapper[4752]: I0122 10:42:30.657569 4752 generic.go:334] "Generic (PLEG): container finished" podID="9750781f-e5d3-4106-ac9e-431b017df583" containerID="2b5c2d1fca13358092182d00f7908299fdf8e03ff7a5c1c1f7fca3e7d84a462a" exitCode=0
Jan 22 10:42:30 crc kubenswrapper[4752]: I0122 10:42:30.658412 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerDied","Data":"2b5c2d1fca13358092182d00f7908299fdf8e03ff7a5c1c1f7fca3e7d84a462a"}
Jan 22 10:42:30 crc kubenswrapper[4752]: I0122 10:42:30.697350 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 22 10:42:30 crc kubenswrapper[4752]: I0122 10:42:30.744399 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.108980 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" path="/var/lib/kubelet/pods/b07139f0-058d-4338-88f7-7e42a9aebeb6/volumes"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.131221 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.131265 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.208149 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.208218 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.249616 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.685072 4752 generic.go:334] "Generic (PLEG): container finished" podID="61a33a59-edcf-44fe-97c6-2a397c69c87a" containerID="760d499232cdc65e422029a81e2e22b260c74f188b94d672bce3e61f92fe6156" exitCode=0
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.685910 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nrt6p" event={"ID":"61a33a59-edcf-44fe-97c6-2a397c69c87a","Type":"ContainerDied","Data":"760d499232cdc65e422029a81e2e22b260c74f188b94d672bce3e61f92fe6156"}
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.686188 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 22 10:42:31 crc kubenswrapper[4752]: I0122 10:42:31.844661 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.106041 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58d8fd8d89-rrmt8"]
Jan 22 10:42:32 crc kubenswrapper[4752]: E0122 10:42:32.106477 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerName="init"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.106500 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerName="init"
Jan 22 10:42:32 crc kubenswrapper[4752]: E0122 10:42:32.106527 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerName="dnsmasq-dns"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.106534 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerName="dnsmasq-dns"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.106698 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07139f0-058d-4338-88f7-7e42a9aebeb6" containerName="dnsmasq-dns"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.107584 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.109230 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.120066 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58d8fd8d89-rrmt8"]
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.148487 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6tj9z"]
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.149539 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.154191 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.169650 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6tj9z"]
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226743 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxd2\" (UniqueName: \"kubernetes.io/projected/789beb82-b859-4f18-9f1b-76e0e4beebe7-kube-api-access-5pxd2\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226807 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c908e16a-1811-4df8-a77e-78da83dde73c-ovn-rundir\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226892 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-dns-svc\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226914 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c908e16a-1811-4df8-a77e-78da83dde73c-ovs-rundir\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226935 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c908e16a-1811-4df8-a77e-78da83dde73c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226957 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c908e16a-1811-4df8-a77e-78da83dde73c-config\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.226977 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-config\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.227015 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnd6\" (UniqueName: \"kubernetes.io/projected/c908e16a-1811-4df8-a77e-78da83dde73c-kube-api-access-ntnd6\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.227034 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c908e16a-1811-4df8-a77e-78da83dde73c-combined-ca-bundle\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.227052 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328310 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxd2\" (UniqueName: \"kubernetes.io/projected/789beb82-b859-4f18-9f1b-76e0e4beebe7-kube-api-access-5pxd2\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328386 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c908e16a-1811-4df8-a77e-78da83dde73c-ovn-rundir\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z"
Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328442 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-dns-svc\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328466 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c908e16a-1811-4df8-a77e-78da83dde73c-ovs-rundir\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328496 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c908e16a-1811-4df8-a77e-78da83dde73c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328526 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c908e16a-1811-4df8-a77e-78da83dde73c-config\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328552 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-config\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328595 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnd6\" (UniqueName: \"kubernetes.io/projected/c908e16a-1811-4df8-a77e-78da83dde73c-kube-api-access-ntnd6\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328628 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c908e16a-1811-4df8-a77e-78da83dde73c-combined-ca-bundle\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328649 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.328818 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c908e16a-1811-4df8-a77e-78da83dde73c-ovs-rundir\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.329187 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c908e16a-1811-4df8-a77e-78da83dde73c-ovn-rundir\") pod 
\"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.329370 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c908e16a-1811-4df8-a77e-78da83dde73c-config\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.329461 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-dns-svc\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.329481 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-config\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.329664 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-ovsdbserver-nb\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.337381 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c908e16a-1811-4df8-a77e-78da83dde73c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.338031 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c908e16a-1811-4df8-a77e-78da83dde73c-combined-ca-bundle\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.345111 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnd6\" (UniqueName: \"kubernetes.io/projected/c908e16a-1811-4df8-a77e-78da83dde73c-kube-api-access-ntnd6\") pod \"ovn-controller-metrics-6tj9z\" (UID: \"c908e16a-1811-4df8-a77e-78da83dde73c\") " pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.353744 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxd2\" (UniqueName: \"kubernetes.io/projected/789beb82-b859-4f18-9f1b-76e0e4beebe7-kube-api-access-5pxd2\") pod \"dnsmasq-dns-58d8fd8d89-rrmt8\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.423145 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.471300 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6tj9z" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.529730 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8fd8d89-rrmt8"] Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.568170 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"] Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.571338 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.574845 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.580722 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"] Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.738307 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-nb\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.738719 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-sb\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.738760 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-config\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.738913 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-dns-svc\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.739013 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-kube-api-access-dm6b9\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.841021 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-dns-svc\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.841133 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm6b9\" (UniqueName: 
\"kubernetes.io/projected/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-kube-api-access-dm6b9\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.841211 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-nb\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.841226 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-sb\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.841263 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-config\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.842829 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-sb\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.842875 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-nb\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.842835 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-dns-svc\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.843626 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-config\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.868855 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-kube-api-access-dm6b9\") pod \"dnsmasq-dns-55ddfd5dfc-4mjcj\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:32 crc kubenswrapper[4752]: I0122 10:42:32.894796 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.008620 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8fd8d89-rrmt8"] Jan 22 10:42:33 crc kubenswrapper[4752]: W0122 10:42:33.014156 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod789beb82_b859_4f18_9f1b_76e0e4beebe7.slice/crio-73113fe238995db2a4525288279a1976e2563beb425a10d6bd3227af92e1de09 WatchSource:0}: Error finding container 73113fe238995db2a4525288279a1976e2563beb425a10d6bd3227af92e1de09: Status 404 returned error can't find the container with id 73113fe238995db2a4525288279a1976e2563beb425a10d6bd3227af92e1de09 Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.049919 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.090453 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6tj9z"] Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.145593 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-combined-ca-bundle\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.145699 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-swiftconf\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.145739 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-dispersionconf\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.145826 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rkph\" (UniqueName: \"kubernetes.io/projected/61a33a59-edcf-44fe-97c6-2a397c69c87a-kube-api-access-4rkph\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.145982 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-scripts\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.146045 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/61a33a59-edcf-44fe-97c6-2a397c69c87a-etc-swift\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: \"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.146111 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-ring-data-devices\") pod \"61a33a59-edcf-44fe-97c6-2a397c69c87a\" (UID: 
\"61a33a59-edcf-44fe-97c6-2a397c69c87a\") " Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.147639 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a33a59-edcf-44fe-97c6-2a397c69c87a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.149471 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.151435 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a33a59-edcf-44fe-97c6-2a397c69c87a-kube-api-access-4rkph" (OuterVolumeSpecName: "kube-api-access-4rkph") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "kube-api-access-4rkph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.170439 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.174652 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-scripts" (OuterVolumeSpecName: "scripts") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.189062 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.194540 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61a33a59-edcf-44fe-97c6-2a397c69c87a" (UID: "61a33a59-edcf-44fe-97c6-2a397c69c87a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:42:33 crc kubenswrapper[4752]: W0122 10:42:33.197998 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba24cc8_dd8f_48b1_8fa9_cfc8306bfc5e.slice/crio-6dd5dd4b89f8c719edcae3d9da89e02b3c2f71e555d43d7bfc306414757f18a7 WatchSource:0}: Error finding container 6dd5dd4b89f8c719edcae3d9da89e02b3c2f71e555d43d7bfc306414757f18a7: Status 404 returned error can't find the container with id 6dd5dd4b89f8c719edcae3d9da89e02b3c2f71e555d43d7bfc306414757f18a7 Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.200362 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"] Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.248961 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.249826 4752 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/61a33a59-edcf-44fe-97c6-2a397c69c87a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.249847 4752 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/61a33a59-edcf-44fe-97c6-2a397c69c87a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.249886 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.249899 4752 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.249910 4752 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/61a33a59-edcf-44fe-97c6-2a397c69c87a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.249923 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rkph\" (UniqueName: \"kubernetes.io/projected/61a33a59-edcf-44fe-97c6-2a397c69c87a-kube-api-access-4rkph\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.704370 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nrt6p" event={"ID":"61a33a59-edcf-44fe-97c6-2a397c69c87a","Type":"ContainerDied","Data":"103a8a9614f957cec9549467b949e6fd556471fb3dbd5c6684509aa93453ac1c"} Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.704421 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103a8a9614f957cec9549467b949e6fd556471fb3dbd5c6684509aa93453ac1c" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.704441 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nrt6p" Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.705656 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6tj9z" event={"ID":"c908e16a-1811-4df8-a77e-78da83dde73c","Type":"ContainerStarted","Data":"c06543fb778f8b74a1ff597b99025bcabf7515328d4aca8db0230d76542ee62b"} Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.707283 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" event={"ID":"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e","Type":"ContainerStarted","Data":"6dd5dd4b89f8c719edcae3d9da89e02b3c2f71e555d43d7bfc306414757f18a7"} Jan 22 10:42:33 crc kubenswrapper[4752]: I0122 10:42:33.709571 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" event={"ID":"789beb82-b859-4f18-9f1b-76e0e4beebe7","Type":"ContainerStarted","Data":"73113fe238995db2a4525288279a1976e2563beb425a10d6bd3227af92e1de09"} Jan 22 10:42:34 crc kubenswrapper[4752]: I0122 10:42:34.716494 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6tj9z" event={"ID":"c908e16a-1811-4df8-a77e-78da83dde73c","Type":"ContainerStarted","Data":"af2c52aaec39c41f7de9bfe3c85d58895af3fe1d2be54bfe64667b0ce0b81ea0"} Jan 22 10:42:34 crc kubenswrapper[4752]: I0122 10:42:34.718763 4752 generic.go:334] "Generic (PLEG): container finished" podID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerID="1b3fbde4742ba245b889260a74cef0e89c0d02fe422d5c57a1e692a0d68e4bc0" exitCode=0 Jan 22 10:42:34 crc kubenswrapper[4752]: I0122 10:42:34.718802 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" event={"ID":"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e","Type":"ContainerDied","Data":"1b3fbde4742ba245b889260a74cef0e89c0d02fe422d5c57a1e692a0d68e4bc0"} Jan 22 10:42:34 crc kubenswrapper[4752]: I0122 10:42:34.722373 4752 generic.go:334] "Generic (PLEG): container finished" podID="789beb82-b859-4f18-9f1b-76e0e4beebe7" containerID="ff87f6819abc86ad5fa322bc74b7b3a84ea43b9af848a4922c36a1e33313989b" exitCode=0 Jan 22 10:42:34 crc kubenswrapper[4752]: I0122 10:42:34.722428 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" event={"ID":"789beb82-b859-4f18-9f1b-76e0e4beebe7","Type":"ContainerDied","Data":"ff87f6819abc86ad5fa322bc74b7b3a84ea43b9af848a4922c36a1e33313989b"} Jan 22 10:42:34 crc kubenswrapper[4752]: I0122 10:42:34.739337 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6tj9z" podStartSLOduration=2.7392636230000003 podStartE2EDuration="2.739263623s" podCreationTimestamp="2026-01-22 10:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:34.733909992 +0000 UTC m=+1033.963852920" watchObservedRunningTime="2026-01-22 10:42:34.739263623 +0000 UTC m=+1033.969206531" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.088105 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.189640 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-dns-svc\") pod \"789beb82-b859-4f18-9f1b-76e0e4beebe7\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.190012 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-config\") pod \"789beb82-b859-4f18-9f1b-76e0e4beebe7\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.190065 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-ovsdbserver-nb\") pod \"789beb82-b859-4f18-9f1b-76e0e4beebe7\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.190131 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxd2\" (UniqueName: \"kubernetes.io/projected/789beb82-b859-4f18-9f1b-76e0e4beebe7-kube-api-access-5pxd2\") pod \"789beb82-b859-4f18-9f1b-76e0e4beebe7\" (UID: \"789beb82-b859-4f18-9f1b-76e0e4beebe7\") " Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.194136 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789beb82-b859-4f18-9f1b-76e0e4beebe7-kube-api-access-5pxd2" (OuterVolumeSpecName: "kube-api-access-5pxd2") pod "789beb82-b859-4f18-9f1b-76e0e4beebe7" (UID: "789beb82-b859-4f18-9f1b-76e0e4beebe7"). InnerVolumeSpecName "kube-api-access-5pxd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.213221 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "789beb82-b859-4f18-9f1b-76e0e4beebe7" (UID: "789beb82-b859-4f18-9f1b-76e0e4beebe7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.213798 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-config" (OuterVolumeSpecName: "config") pod "789beb82-b859-4f18-9f1b-76e0e4beebe7" (UID: "789beb82-b859-4f18-9f1b-76e0e4beebe7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.214096 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "789beb82-b859-4f18-9f1b-76e0e4beebe7" (UID: "789beb82-b859-4f18-9f1b-76e0e4beebe7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.291748 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.291781 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxd2\" (UniqueName: \"kubernetes.io/projected/789beb82-b859-4f18-9f1b-76e0e4beebe7-kube-api-access-5pxd2\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.291791 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.291799 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789beb82-b859-4f18-9f1b-76e0e4beebe7-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.738732 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" event={"ID":"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e","Type":"ContainerStarted","Data":"42ac7c02f6c6b5a4960999b646809c2016231386685668322a157f0e60127f8f"} Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.739944 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.742028 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.743038 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58d8fd8d89-rrmt8" event={"ID":"789beb82-b859-4f18-9f1b-76e0e4beebe7","Type":"ContainerDied","Data":"73113fe238995db2a4525288279a1976e2563beb425a10d6bd3227af92e1de09"} Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.743105 4752 scope.go:117] "RemoveContainer" containerID="ff87f6819abc86ad5fa322bc74b7b3a84ea43b9af848a4922c36a1e33313989b" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.765877 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" podStartSLOduration=3.765837966 podStartE2EDuration="3.765837966s" podCreationTimestamp="2026-01-22 10:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:35.75765984 +0000 UTC m=+1034.987602758" watchObservedRunningTime="2026-01-22 10:42:35.765837966 +0000 UTC m=+1034.995780874" Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.817758 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58d8fd8d89-rrmt8"] Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.826098 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58d8fd8d89-rrmt8"] Jan 22 10:42:35 crc kubenswrapper[4752]: I0122 10:42:35.904781 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.069941 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 10:42:36 crc 
kubenswrapper[4752]: I0122 10:42:36.252886 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.496746 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 10:42:36 crc kubenswrapper[4752]: E0122 10:42:36.497161 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789beb82-b859-4f18-9f1b-76e0e4beebe7" containerName="init" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.497184 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="789beb82-b859-4f18-9f1b-76e0e4beebe7" containerName="init" Jan 22 10:42:36 crc kubenswrapper[4752]: E0122 10:42:36.497233 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a33a59-edcf-44fe-97c6-2a397c69c87a" containerName="swift-ring-rebalance" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.497242 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a33a59-edcf-44fe-97c6-2a397c69c87a" containerName="swift-ring-rebalance" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.497421 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="789beb82-b859-4f18-9f1b-76e0e4beebe7" containerName="init" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.497449 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a33a59-edcf-44fe-97c6-2a397c69c87a" containerName="swift-ring-rebalance" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.498495 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.500790 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.501065 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.501398 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.504174 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4n8b6" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.539528 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.632627 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.632672 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7js\" (UniqueName: \"kubernetes.io/projected/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-kube-api-access-xf7js\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.632792 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-ovn-northd-tls-certs\") pod \"ovn-northd-0\" 
(UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.633057 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.633125 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.633208 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-config\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.633289 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-scripts\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735048 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735132 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7js\" (UniqueName: \"kubernetes.io/projected/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-kube-api-access-xf7js\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735165 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735295 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735340 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735374 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-config\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.735409 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-scripts\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.736415 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-scripts\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.738815 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-config\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.740757 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.742550 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.742663 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.744501 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.756502 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7js\" (UniqueName: \"kubernetes.io/projected/c69c7c08-1b8d-43ad-aad3-bb292f64ad86-kube-api-access-xf7js\") pod \"ovn-northd-0\" (UID: \"c69c7c08-1b8d-43ad-aad3-bb292f64ad86\") " pod="openstack/ovn-northd-0" Jan 22 10:42:36 crc kubenswrapper[4752]: I0122 10:42:36.851389 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 10:42:37 crc kubenswrapper[4752]: I0122 10:42:37.110631 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789beb82-b859-4f18-9f1b-76e0e4beebe7" path="/var/lib/kubelet/pods/789beb82-b859-4f18-9f1b-76e0e4beebe7/volumes" Jan 22 10:42:37 crc kubenswrapper[4752]: W0122 10:42:37.286281 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc69c7c08_1b8d_43ad_aad3_bb292f64ad86.slice/crio-f27107ff21869435e94ddad72a19546952b83ebd9b3b9ff568ff820ae9586fba WatchSource:0}: Error finding container f27107ff21869435e94ddad72a19546952b83ebd9b3b9ff568ff820ae9586fba: Status 404 returned error can't find the container with id f27107ff21869435e94ddad72a19546952b83ebd9b3b9ff568ff820ae9586fba Jan 22 10:42:37 crc kubenswrapper[4752]: I0122 10:42:37.287108 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 10:42:37 crc kubenswrapper[4752]: I0122 10:42:37.762004 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c69c7c08-1b8d-43ad-aad3-bb292f64ad86","Type":"ContainerStarted","Data":"f27107ff21869435e94ddad72a19546952b83ebd9b3b9ff568ff820ae9586fba"} Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.437913 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6q7qn"] Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.439963 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.448371 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.470499 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6q7qn"] Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.487597 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgcg\" (UniqueName: \"kubernetes.io/projected/3789dae0-aca0-4d60-a98b-7970d675e0d0-kube-api-access-9zgcg\") pod \"root-account-create-update-6q7qn\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.487800 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3789dae0-aca0-4d60-a98b-7970d675e0d0-operator-scripts\") pod \"root-account-create-update-6q7qn\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.589703 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgcg\" (UniqueName: \"kubernetes.io/projected/3789dae0-aca0-4d60-a98b-7970d675e0d0-kube-api-access-9zgcg\") pod \"root-account-create-update-6q7qn\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.589854 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3789dae0-aca0-4d60-a98b-7970d675e0d0-operator-scripts\") pod \"root-account-create-update-6q7qn\" 
(UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.590796 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3789dae0-aca0-4d60-a98b-7970d675e0d0-operator-scripts\") pod \"root-account-create-update-6q7qn\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.614481 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgcg\" (UniqueName: \"kubernetes.io/projected/3789dae0-aca0-4d60-a98b-7970d675e0d0-kube-api-access-9zgcg\") pod \"root-account-create-update-6q7qn\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:38 crc kubenswrapper[4752]: I0122 10:42:38.768388 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:39 crc kubenswrapper[4752]: I0122 10:42:39.289531 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 10:42:39 crc kubenswrapper[4752]: I0122 10:42:39.482956 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 10:42:40 crc kubenswrapper[4752]: I0122 10:42:40.923667 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-hjjs9"] Jan 22 10:42:40 crc kubenswrapper[4752]: I0122 10:42:40.925377 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:40 crc kubenswrapper[4752]: I0122 10:42:40.936580 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hjjs9"] Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.040920 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67e9-account-create-update-8cjp4"] Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.042557 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.046916 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.050501 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wnp\" (UniqueName: \"kubernetes.io/projected/6b2dfe4d-751e-4896-9712-035e127f29ca-kube-api-access-b8wnp\") pod \"keystone-db-create-hjjs9\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") " pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.050656 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2dfe4d-751e-4896-9712-035e127f29ca-operator-scripts\") pod \"keystone-db-create-hjjs9\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") " pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.055023 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67e9-account-create-update-8cjp4"] Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.152328 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-operator-scripts\") pod \"keystone-67e9-account-create-update-8cjp4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") " pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.152718 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hmb\" (UniqueName: \"kubernetes.io/projected/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-kube-api-access-v5hmb\") pod \"keystone-67e9-account-create-update-8cjp4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") " pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.153418 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wnp\" (UniqueName: \"kubernetes.io/projected/6b2dfe4d-751e-4896-9712-035e127f29ca-kube-api-access-b8wnp\") pod \"keystone-db-create-hjjs9\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") " pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.153548 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2dfe4d-751e-4896-9712-035e127f29ca-operator-scripts\") pod \"keystone-db-create-hjjs9\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") " pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.154498 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2dfe4d-751e-4896-9712-035e127f29ca-operator-scripts\") pod \"keystone-db-create-hjjs9\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") " pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.178244 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8wnp\" (UniqueName: \"kubernetes.io/projected/6b2dfe4d-751e-4896-9712-035e127f29ca-kube-api-access-b8wnp\") pod 
\"keystone-db-create-hjjs9\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") " pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.247187 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hjjs9" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.255232 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-operator-scripts\") pod \"keystone-67e9-account-create-update-8cjp4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") " pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.255277 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hmb\" (UniqueName: \"kubernetes.io/projected/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-kube-api-access-v5hmb\") pod \"keystone-67e9-account-create-update-8cjp4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") " pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.256306 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-operator-scripts\") pod \"keystone-67e9-account-create-update-8cjp4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") " pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.273122 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hmb\" (UniqueName: \"kubernetes.io/projected/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-kube-api-access-v5hmb\") pod \"keystone-67e9-account-create-update-8cjp4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") " pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.362056 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.381975 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rw8mj"] Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.383191 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.403847 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rw8mj"]
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.466607 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-operator-scripts\") pod \"placement-db-create-rw8mj\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") " pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.466749 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29n8s\" (UniqueName: \"kubernetes.io/projected/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-kube-api-access-29n8s\") pod \"placement-db-create-rw8mj\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") " pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.519528 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74ba-account-create-update-8hmpj"]
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.526839 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.529388 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.534380 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74ba-account-create-update-8hmpj"]
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.567600 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-operator-scripts\") pod \"placement-db-create-rw8mj\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") " pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.567675 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29n8s\" (UniqueName: \"kubernetes.io/projected/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-kube-api-access-29n8s\") pod \"placement-db-create-rw8mj\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") " pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.567703 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkv2r\" (UniqueName: \"kubernetes.io/projected/bd285c5e-d4b4-4402-97b9-aff12576faef-kube-api-access-lkv2r\") pod \"placement-74ba-account-create-update-8hmpj\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") " pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.567749 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd285c5e-d4b4-4402-97b9-aff12576faef-operator-scripts\") pod \"placement-74ba-account-create-update-8hmpj\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") " pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.569072 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-operator-scripts\") pod \"placement-db-create-rw8mj\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") " pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.584497 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29n8s\" (UniqueName: \"kubernetes.io/projected/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-kube-api-access-29n8s\") pod \"placement-db-create-rw8mj\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") " pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.668909 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkv2r\" (UniqueName: \"kubernetes.io/projected/bd285c5e-d4b4-4402-97b9-aff12576faef-kube-api-access-lkv2r\") pod \"placement-74ba-account-create-update-8hmpj\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") " pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.668981 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd285c5e-d4b4-4402-97b9-aff12576faef-operator-scripts\") pod \"placement-74ba-account-create-update-8hmpj\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") " pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.670067 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd285c5e-d4b4-4402-97b9-aff12576faef-operator-scripts\") pod \"placement-74ba-account-create-update-8hmpj\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") " pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.684435 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkv2r\" (UniqueName: \"kubernetes.io/projected/bd285c5e-d4b4-4402-97b9-aff12576faef-kube-api-access-lkv2r\") pod \"placement-74ba-account-create-update-8hmpj\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") " pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.754890 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:41 crc kubenswrapper[4752]: I0122 10:42:41.848728 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:42 crc kubenswrapper[4752]: I0122 10:42:42.897154 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"
Jan 22 10:42:42 crc kubenswrapper[4752]: I0122 10:42:42.987395 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f9bf46d5-7xzb6"]
Jan 22 10:42:42 crc kubenswrapper[4752]: I0122 10:42:42.987672 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" containerID="cri-o://270612d024ba19436c3e4cc9709171bb8a6aea8c01e882181ca33201cb66c990" gracePeriod=10
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.782398 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-rr8lq"]
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.783952 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.789150 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-rr8lq"]
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.803047 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff0ab31-0a6b-45a4-8a4d-484c75853276-operator-scripts\") pod \"watcher-db-create-rr8lq\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") " pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.803130 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg28h\" (UniqueName: \"kubernetes.io/projected/5ff0ab31-0a6b-45a4-8a4d-484c75853276-kube-api-access-gg28h\") pod \"watcher-db-create-rr8lq\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") " pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.833300 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.905276 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff0ab31-0a6b-45a4-8a4d-484c75853276-operator-scripts\") pod \"watcher-db-create-rr8lq\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") " pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.905380 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg28h\" (UniqueName: \"kubernetes.io/projected/5ff0ab31-0a6b-45a4-8a4d-484c75853276-kube-api-access-gg28h\") pod \"watcher-db-create-rr8lq\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") " pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.906354 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff0ab31-0a6b-45a4-8a4d-484c75853276-operator-scripts\") pod \"watcher-db-create-rr8lq\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") " pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.918572 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-0920-account-create-update-jpx4w"]
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.919562 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.923951 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.924259 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg28h\" (UniqueName: \"kubernetes.io/projected/5ff0ab31-0a6b-45a4-8a4d-484c75853276-kube-api-access-gg28h\") pod \"watcher-db-create-rr8lq\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") " pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:43 crc kubenswrapper[4752]: I0122 10:42:43.939226 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-0920-account-create-update-jpx4w"]
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.109574 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8zr9\" (UniqueName: \"kubernetes.io/projected/b6de9060-b65e-43cc-b492-e19b84135efb-kube-api-access-m8zr9\") pod \"watcher-0920-account-create-update-jpx4w\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") " pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.109746 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6de9060-b65e-43cc-b492-e19b84135efb-operator-scripts\") pod \"watcher-0920-account-create-update-jpx4w\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") " pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.125040 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.211167 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6de9060-b65e-43cc-b492-e19b84135efb-operator-scripts\") pod \"watcher-0920-account-create-update-jpx4w\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") " pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.211501 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8zr9\" (UniqueName: \"kubernetes.io/projected/b6de9060-b65e-43cc-b492-e19b84135efb-kube-api-access-m8zr9\") pod \"watcher-0920-account-create-update-jpx4w\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") " pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.211959 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6de9060-b65e-43cc-b492-e19b84135efb-operator-scripts\") pod \"watcher-0920-account-create-update-jpx4w\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") " pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.228499 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8zr9\" (UniqueName: \"kubernetes.io/projected/b6de9060-b65e-43cc-b492-e19b84135efb-kube-api-access-m8zr9\") pod \"watcher-0920-account-create-update-jpx4w\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") " pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:44 crc kubenswrapper[4752]: I0122 10:42:44.271417 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:46 crc kubenswrapper[4752]: I0122 10:42:46.653711 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:46 crc kubenswrapper[4752]: I0122 10:42:46.660434 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b96c9ad-917b-4891-a39f-3f19c92bdd30-etc-swift\") pod \"swift-storage-0\" (UID: \"7b96c9ad-917b-4891-a39f-3f19c92bdd30\") " pod="openstack/swift-storage-0"
Jan 22 10:42:46 crc kubenswrapper[4752]: I0122 10:42:46.758382 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 22 10:42:47 crc kubenswrapper[4752]: I0122 10:42:47.864960 4752 generic.go:334] "Generic (PLEG): container finished" podID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerID="270612d024ba19436c3e4cc9709171bb8a6aea8c01e882181ca33201cb66c990" exitCode=0
Jan 22 10:42:47 crc kubenswrapper[4752]: I0122 10:42:47.865035 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" event={"ID":"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4","Type":"ContainerDied","Data":"270612d024ba19436c3e4cc9709171bb8a6aea8c01e882181ca33201cb66c990"}
Jan 22 10:42:48 crc kubenswrapper[4752]: I0122 10:42:48.832722 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused"
Jan 22 10:42:51 crc kubenswrapper[4752]: I0122 10:42:51.847553 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vpt5c" podUID="7d696fd8-24f0-4e7a-801b-6376ea06f238" containerName="ovn-controller" probeResult="failure" output=<
Jan 22 10:42:51 crc kubenswrapper[4752]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 22 10:42:51 crc kubenswrapper[4752]: >
Jan 22 10:42:51 crc kubenswrapper[4752]: I0122 10:42:51.905193 4752 generic.go:334] "Generic (PLEG): container finished" podID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerID="b552ddb1cbe37b0846222181a70e2b5fabda4c2e97937ec8510628d7daab0110" exitCode=0
Jan 22 10:42:51 crc kubenswrapper[4752]: I0122 10:42:51.905271 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52","Type":"ContainerDied","Data":"b552ddb1cbe37b0846222181a70e2b5fabda4c2e97937ec8510628d7daab0110"}
Jan 22 10:42:53 crc kubenswrapper[4752]: E0122 10:42:53.438180 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741"
Jan 22 10:42:53 crc kubenswrapper[4752]: E0122 10:42:53.438735 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c85mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(9750781f-e5d3-4106-ac9e-431b017df583): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 10:42:53 crc kubenswrapper[4752]: E0122 10:42:53.700541 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest"
Jan 22 10:42:53 crc kubenswrapper[4752]: E0122 10:42:53.700614 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest"
Jan 22 10:42:53 crc kubenswrapper[4752]: E0122 10:42:53.700886 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:38.102.83.32:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697hddh597hc4h5cch68dh4h554hc9h5bfhd7hbch85h68fh9fh569h9h67fh664h5bbh54h646h9ch594h695h579h59hfdh9h55bh684h668q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:nbfh564h56bh5ddh59h8fh5bbh657h694h85h6fh67chdh77h54fhcfh57bhb5h4h5fbh97h7bh675h6dh6bh679h557h697h674h5ch7ch8bq,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:nf9h664h68ch54dh96h66h645h4h8h56h667h596hdfh5b6h5bfh687h5fdh6dhf8h5ffh57fh94hdhf9h5f4hbch5cdhcdh5dh659h646hfcq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:nfdh56bhc7h675hd8h66ch574hbfh5c4hdhbbh58fhf8h658h5f8h58ch68ch674h647h5d4h5ddh655h6fh66hf6h684h5b8h9fh68fh5b9h646h549q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf7js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(c69c7c08-1b8d-43ad-aad3-bb292f64ad86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.872578 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6"
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.913152 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj852\" (UniqueName: \"kubernetes.io/projected/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-kube-api-access-pj852\") pod \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") "
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.913358 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-dns-svc\") pod \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") "
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.913567 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config\") pod \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") "
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.927288 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-kube-api-access-pj852" (OuterVolumeSpecName: "kube-api-access-pj852") pod "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" (UID: "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4"). InnerVolumeSpecName "kube-api-access-pj852". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:53 crc kubenswrapper[4752]: E0122 10:42:53.983355 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config podName:4f26f240-ac69-44d2-9e8a-c119b4a9b8f4 nodeName:}" failed. No retries permitted until 2026-01-22 10:42:54.483324583 +0000 UTC m=+1053.713267491 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config") pod "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" (UID: "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4") : error deleting /var/lib/kubelet/pods/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4/volume-subpaths: remove /var/lib/kubelet/pods/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4/volume-subpaths: no such file or directory
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.983636 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" (UID: "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.984127 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6"
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.984020 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" event={"ID":"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4","Type":"ContainerDied","Data":"4ec333b3999a4d9f624fbf0b2874eed9e12682a1dc82e403f83375c1e5961d06"}
Jan 22 10:42:53 crc kubenswrapper[4752]: I0122 10:42:53.984941 4752 scope.go:117] "RemoveContainer" containerID="270612d024ba19436c3e4cc9709171bb8a6aea8c01e882181ca33201cb66c990"
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.016030 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.016064 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj852\" (UniqueName: \"kubernetes.io/projected/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-kube-api-access-pj852\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.036325 4752 scope.go:117] "RemoveContainer" containerID="39b5775e187bd7f154652a27cbdabd622f9153607541a05ef33604b10fe848a9"
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.182552 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67e9-account-create-update-8cjp4"]
Jan 22 10:42:54 crc kubenswrapper[4752]: W0122 10:42:54.191260 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a7e3fbf_6502_4604_bce5_72c2b5f5c1e4.slice/crio-c371587b3b5529047b7e09ebf54279720849691abeb7ebbfb47368e94572bc92 WatchSource:0}: Error finding container c371587b3b5529047b7e09ebf54279720849691abeb7ebbfb47368e94572bc92: Status 404 returned error can't find the container with id c371587b3b5529047b7e09ebf54279720849691abeb7ebbfb47368e94572bc92
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.427379 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74ba-account-create-update-8hmpj"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.447188 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-0920-account-create-update-jpx4w"]
Jan 22 10:42:54 crc kubenswrapper[4752]: E0122 10:42:54.502484 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="c69c7c08-1b8d-43ad-aad3-bb292f64ad86"
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.524722 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config\") pod \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\" (UID: \"4f26f240-ac69-44d2-9e8a-c119b4a9b8f4\") "
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.525757 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config" (OuterVolumeSpecName: "config") pod "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" (UID: "4f26f240-ac69-44d2-9e8a-c119b4a9b8f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.551835 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.628944 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hjjs9"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.628999 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4-config\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.649252 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-rr8lq"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.669510 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6q7qn"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.681124 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f9bf46d5-7xzb6"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.691366 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f9bf46d5-7xzb6"]
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.754041 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rw8mj"]
Jan 22 10:42:54 crc kubenswrapper[4752]: W0122 10:42:54.768556 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3789dae0_aca0_4d60_a98b_7970d675e0d0.slice/crio-9ab2afc28bc8e70405a48900ad67d5504fc3116744906a892ee91c76878c75c8 WatchSource:0}: Error finding container 9ab2afc28bc8e70405a48900ad67d5504fc3116744906a892ee91c76878c75c8: Status 404 returned error can't find the container with id 9ab2afc28bc8e70405a48900ad67d5504fc3116744906a892ee91c76878c75c8
Jan 22 10:42:54 crc kubenswrapper[4752]: W0122 10:42:54.776819 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c78780d_8ad0_45e5_86f1_c3cb7beccf0e.slice/crio-9ef5f6f489bb4f9d9fc054ef22c56cc719b3ed059ea3e3be45663756529a56a2 WatchSource:0}: Error finding container 9ef5f6f489bb4f9d9fc054ef22c56cc719b3ed059ea3e3be45663756529a56a2: Status 404 returned error can't find the container with id 9ef5f6f489bb4f9d9fc054ef22c56cc719b3ed059ea3e3be45663756529a56a2
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.995394 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0920-account-create-update-jpx4w" event={"ID":"b6de9060-b65e-43cc-b492-e19b84135efb","Type":"ContainerStarted","Data":"475df6400bc4e609255324901aca9a5c548fa920c1cd506dc7facd2a04735674"}
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.995743 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0920-account-create-update-jpx4w" event={"ID":"b6de9060-b65e-43cc-b492-e19b84135efb","Type":"ContainerStarted","Data":"24c8967ae6c002cbb23bf2c062f5fa79e1ed8e534f5e4a181c6c5fc85841a455"}
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.998541 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52","Type":"ContainerStarted","Data":"1b5ba6af172ae5f9bedb82b945f78a0e42a7b55eb9705883940015026f42522c"}
Jan 22 10:42:54 crc kubenswrapper[4752]: I0122 10:42:54.998789 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:54.999894 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hjjs9" event={"ID":"6b2dfe4d-751e-4896-9712-035e127f29ca","Type":"ContainerStarted","Data":"9626e8d48ea65bde987be02a31ed0dc20e0d191c0008f9d2e9ef114f50098b81"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:54.999919 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hjjs9" event={"ID":"6b2dfe4d-751e-4896-9712-035e127f29ca","Type":"ContainerStarted","Data":"37af413822afa2a036f8ff5f9b07c53cf86faa939a84565ee0b4c6dcd2b3110d"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.001223 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67e9-account-create-update-8cjp4" event={"ID":"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4","Type":"ContainerStarted","Data":"fe676518b26d54eb615a490cdc2c63b87adee6aaebf8ff041e50ba35cd94704d"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.001241 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67e9-account-create-update-8cjp4" event={"ID":"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4","Type":"ContainerStarted","Data":"c371587b3b5529047b7e09ebf54279720849691abeb7ebbfb47368e94572bc92"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.002557 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q7qn" event={"ID":"3789dae0-aca0-4d60-a98b-7970d675e0d0","Type":"ContainerStarted","Data":"9ab2afc28bc8e70405a48900ad67d5504fc3116744906a892ee91c76878c75c8"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.004794 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8mj" event={"ID":"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e","Type":"ContainerStarted","Data":"994f82c7b47f667f80b18e8f580ac5009f1c30c5b348ea2df229dc5a8dce1df9"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.004821 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8mj" event={"ID":"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e","Type":"ContainerStarted","Data":"9ef5f6f489bb4f9d9fc054ef22c56cc719b3ed059ea3e3be45663756529a56a2"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.006217 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c69c7c08-1b8d-43ad-aad3-bb292f64ad86","Type":"ContainerStarted","Data":"c39516d7b638a1f18a9d2323d21b92bf3f5bf596c7af3dbf3b0a61d18029cb7c"}
Jan 22 10:42:55 crc kubenswrapper[4752]: E0122 10:42:55.007266 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest\\\"\"" pod="openstack/ovn-northd-0" podUID="c69c7c08-1b8d-43ad-aad3-bb292f64ad86"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.009671 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-rr8lq" event={"ID":"5ff0ab31-0a6b-45a4-8a4d-484c75853276","Type":"ContainerStarted","Data":"7f9d3fb55d1958f9f516555df3bd2b0ad40bd5f1a2288d1604ceeff2ef318ab9"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.009886 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-rr8lq" event={"ID":"5ff0ab31-0a6b-45a4-8a4d-484c75853276","Type":"ContainerStarted","Data":"f58156c667b5ab277c3246d307330d7cf6e1db4ed38b9bd4a446a2e0715c1841"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.013664 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-0920-account-create-update-jpx4w" podStartSLOduration=12.013644187 podStartE2EDuration="12.013644187s" podCreationTimestamp="2026-01-22 10:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:55.008916723 +0000 UTC m=+1054.238859631" watchObservedRunningTime="2026-01-22 10:42:55.013644187 +0000 UTC m=+1054.243587095"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.015468 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74ba-account-create-update-8hmpj" event={"ID":"bd285c5e-d4b4-4402-97b9-aff12576faef","Type":"ContainerStarted","Data":"f2ba9671f2daab8912b37e1b7f6806600dc899e7b316d9d26c2ed45560608890"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.015509 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74ba-account-create-update-8hmpj" event={"ID":"bd285c5e-d4b4-4402-97b9-aff12576faef","Type":"ContainerStarted","Data":"66a29843f555e525fc0ae9af5db886073afc442a39dfccc88356bf91ac59c4a3"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.019646 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"856c414570cbb46847d8c5d7d50ff7582760fad5df2110c6323fec53f073d763"}
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.057288 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67e9-account-create-update-8cjp4" podStartSLOduration=14.057266278 podStartE2EDuration="14.057266278s" podCreationTimestamp="2026-01-22 10:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:55.050165622 +0000 UTC m=+1054.280108530" watchObservedRunningTime="2026-01-22 10:42:55.057266278 +0000 UTC m=+1054.287209186"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.072323 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-rw8mj" podStartSLOduration=14.072304571 podStartE2EDuration="14.072304571s" podCreationTimestamp="2026-01-22 10:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:55.066663904 +0000 UTC m=+1054.296606812" watchObservedRunningTime="2026-01-22 10:42:55.072304571 +0000 UTC m=+1054.302247479"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.110566 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" path="/var/lib/kubelet/pods/4f26f240-ac69-44d2-9e8a-c119b4a9b8f4/volumes"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.122654 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-hjjs9" podStartSLOduration=15.122636448 podStartE2EDuration="15.122636448s" podCreationTimestamp="2026-01-22 10:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:55.116928508 +0000 UTC m=+1054.346871426" watchObservedRunningTime="2026-01-22 10:42:55.122636448 +0000 UTC m=+1054.352579356"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.130670 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=60.375906145 podStartE2EDuration="1m9.130624647s" podCreationTimestamp="2026-01-22 10:41:46 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.251758445 +0000 UTC m=+1004.481701343" lastFinishedPulling="2026-01-22 10:42:14.006476947 +0000 UTC m=+1013.236419845" observedRunningTime="2026-01-22 10:42:55.096202146 +0000 UTC m=+1054.326145054" watchObservedRunningTime="2026-01-22 10:42:55.130624647 +0000 UTC m=+1054.360567575"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.139258 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-rr8lq" podStartSLOduration=12.139244032 podStartE2EDuration="12.139244032s" podCreationTimestamp="2026-01-22 10:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:55.13648564 +0000 UTC m=+1054.366428548" watchObservedRunningTime="2026-01-22 10:42:55.139244032 +0000 UTC m=+1054.369186940"
Jan 22 10:42:55 crc kubenswrapper[4752]: I0122 10:42:55.159246 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74ba-account-create-update-8hmpj" podStartSLOduration=14.159229345 podStartE2EDuration="14.159229345s" podCreationTimestamp="2026-01-22 10:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:55.155463897 +0000 UTC m=+1054.385406805" watchObservedRunningTime="2026-01-22 10:42:55.159229345 +0000 UTC m=+1054.389172253"
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.032294 4752 generic.go:334] "Generic (PLEG): container finished" podID="6b2dfe4d-751e-4896-9712-035e127f29ca" containerID="9626e8d48ea65bde987be02a31ed0dc20e0d191c0008f9d2e9ef114f50098b81" exitCode=0
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.032410 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hjjs9" event={"ID":"6b2dfe4d-751e-4896-9712-035e127f29ca","Type":"ContainerDied","Data":"9626e8d48ea65bde987be02a31ed0dc20e0d191c0008f9d2e9ef114f50098b81"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.034172 4752 generic.go:334] "Generic (PLEG): container finished" podID="3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" containerID="fe676518b26d54eb615a490cdc2c63b87adee6aaebf8ff041e50ba35cd94704d" exitCode=0
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.034224 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67e9-account-create-update-8cjp4" event={"ID":"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4","Type":"ContainerDied","Data":"fe676518b26d54eb615a490cdc2c63b87adee6aaebf8ff041e50ba35cd94704d"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.037327 4752 generic.go:334] "Generic (PLEG): container finished" podID="5ff0ab31-0a6b-45a4-8a4d-484c75853276" containerID="7f9d3fb55d1958f9f516555df3bd2b0ad40bd5f1a2288d1604ceeff2ef318ab9" exitCode=0
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.037388 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-rr8lq" event={"ID":"5ff0ab31-0a6b-45a4-8a4d-484c75853276","Type":"ContainerDied","Data":"7f9d3fb55d1958f9f516555df3bd2b0ad40bd5f1a2288d1604ceeff2ef318ab9"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.039262 4752 generic.go:334] "Generic (PLEG): container finished" podID="b6de9060-b65e-43cc-b492-e19b84135efb" containerID="475df6400bc4e609255324901aca9a5c548fa920c1cd506dc7facd2a04735674" exitCode=0
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.039329 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0920-account-create-update-jpx4w" event={"ID":"b6de9060-b65e-43cc-b492-e19b84135efb","Type":"ContainerDied","Data":"475df6400bc4e609255324901aca9a5c548fa920c1cd506dc7facd2a04735674"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.040842 4752 generic.go:334] "Generic (PLEG): container finished" podID="bd285c5e-d4b4-4402-97b9-aff12576faef" containerID="f2ba9671f2daab8912b37e1b7f6806600dc899e7b316d9d26c2ed45560608890" exitCode=0
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.040894 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74ba-account-create-update-8hmpj" event={"ID":"bd285c5e-d4b4-4402-97b9-aff12576faef","Type":"ContainerDied","Data":"f2ba9671f2daab8912b37e1b7f6806600dc899e7b316d9d26c2ed45560608890"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.042464 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"7e1596b3437b94e9ab75c5140e15b17531a6161ef95e0ddf6947bdd03030edad"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.044394 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q7qn" event={"ID":"3789dae0-aca0-4d60-a98b-7970d675e0d0","Type":"ContainerStarted","Data":"7ce3f8a252ee36861a8bb577959e97074508170c52921da8f95ba06cc33b8cf8"}
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.046566 4752 generic.go:334] "Generic (PLEG): container finished" podID="7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" containerID="994f82c7b47f667f80b18e8f580ac5009f1c30c5b348ea2df229dc5a8dce1df9" exitCode=0
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.046650 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8mj" event={"ID":"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e","Type":"ContainerDied","Data":"994f82c7b47f667f80b18e8f580ac5009f1c30c5b348ea2df229dc5a8dce1df9"}
Jan 22 10:42:56 crc kubenswrapper[4752]: E0122 10:42:56.048041 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest\\\"\"" pod="openstack/ovn-northd-0" podUID="c69c7c08-1b8d-43ad-aad3-bb292f64ad86"
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.173778 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6q7qn" podStartSLOduration=18.173759995 podStartE2EDuration="18.173759995s" podCreationTimestamp="2026-01-22 10:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:42:56.152374395 +0000 UTC m=+1055.382317303" watchObservedRunningTime="2026-01-22 10:42:56.173759995 +0000 UTC m=+1055.403702903"
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.867607 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vpt5c" podUID="7d696fd8-24f0-4e7a-801b-6376ea06f238" containerName="ovn-controller" probeResult="failure" output=<
Jan 22 10:42:56 crc kubenswrapper[4752]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 22 10:42:56 crc kubenswrapper[4752]: >
Jan 22 10:42:56 crc kubenswrapper[4752]: I0122 10:42:56.998830 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z6f8t"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.066491 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"e63393872176803cf49d6efb17ba6a16f2e2b7ed97fea375b6b8d461d16c73d1"}
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.066540 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"5fb023a5d73ff4a1f178c2634c51331c7c2483869fce375d0a968ffe08170491"}
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.066549 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"4e35136814dae34ed3ae276a93de979e57d23551f97d6f5e7408edc1ebca9c67"}
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.068007 4752 generic.go:334] "Generic (PLEG): container finished" podID="3789dae0-aca0-4d60-a98b-7970d675e0d0" containerID="7ce3f8a252ee36861a8bb577959e97074508170c52921da8f95ba06cc33b8cf8" exitCode=0
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.068085 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q7qn" event={"ID":"3789dae0-aca0-4d60-a98b-7970d675e0d0","Type":"ContainerDied","Data":"7ce3f8a252ee36861a8bb577959e97074508170c52921da8f95ba06cc33b8cf8"}
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.076984 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerStarted","Data":"57537c585f30b507b452c4fda8255d7785f54efc4247a2458aba85e17537ff34"}
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.501465 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.603428 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-operator-scripts\") pod \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") "
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.603489 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29n8s\" (UniqueName: \"kubernetes.io/projected/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-kube-api-access-29n8s\") pod \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\" (UID: \"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e\") "
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.604460 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" (UID: "7c78780d-8ad0-45e5-86f1-c3cb7beccf0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.642043 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-kube-api-access-29n8s" (OuterVolumeSpecName: "kube-api-access-29n8s") pod "7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" (UID: "7c78780d-8ad0-45e5-86f1-c3cb7beccf0e"). InnerVolumeSpecName "kube-api-access-29n8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.710124 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.710171 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29n8s\" (UniqueName: \"kubernetes.io/projected/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e-kube-api-access-29n8s\") on node \"crc\" DevicePath \"\""
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.725844 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.725907 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.725952 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.726562 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e87e3a6ca557c47aa1a29b28c97952e66d28f228a2a925e37de3714e751682c"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.726614 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://2e87e3a6ca557c47aa1a29b28c97952e66d28f228a2a925e37de3714e751682c" gracePeriod=600
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.928549 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67e9-account-create-update-8cjp4"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.935034 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.951302 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-rr8lq"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.957342 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hjjs9"
Jan 22 10:42:57 crc kubenswrapper[4752]: I0122 10:42:57.962316 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.019749 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8zr9\" (UniqueName: \"kubernetes.io/projected/b6de9060-b65e-43cc-b492-e19b84135efb-kube-api-access-m8zr9\") pod \"b6de9060-b65e-43cc-b492-e19b84135efb\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.021958 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg28h\" (UniqueName: \"kubernetes.io/projected/5ff0ab31-0a6b-45a4-8a4d-484c75853276-kube-api-access-gg28h\") pod \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.022112 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2dfe4d-751e-4896-9712-035e127f29ca-operator-scripts\") pod \"6b2dfe4d-751e-4896-9712-035e127f29ca\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.022254 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hmb\" (UniqueName: \"kubernetes.io/projected/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-kube-api-access-v5hmb\") pod \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.022377 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd285c5e-d4b4-4402-97b9-aff12576faef-operator-scripts\") pod \"bd285c5e-d4b4-4402-97b9-aff12576faef\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.022780 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8wnp\" (UniqueName: \"kubernetes.io/projected/6b2dfe4d-751e-4896-9712-035e127f29ca-kube-api-access-b8wnp\") pod \"6b2dfe4d-751e-4896-9712-035e127f29ca\" (UID: \"6b2dfe4d-751e-4896-9712-035e127f29ca\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.022911 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-operator-scripts\") pod \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\" (UID: \"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.023054 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6de9060-b65e-43cc-b492-e19b84135efb-operator-scripts\") pod \"b6de9060-b65e-43cc-b492-e19b84135efb\" (UID: \"b6de9060-b65e-43cc-b492-e19b84135efb\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.023415 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkv2r\" (UniqueName: \"kubernetes.io/projected/bd285c5e-d4b4-4402-97b9-aff12576faef-kube-api-access-lkv2r\") pod \"bd285c5e-d4b4-4402-97b9-aff12576faef\" (UID: \"bd285c5e-d4b4-4402-97b9-aff12576faef\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.023541 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff0ab31-0a6b-45a4-8a4d-484c75853276-operator-scripts\") pod \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\" (UID: \"5ff0ab31-0a6b-45a4-8a4d-484c75853276\") "
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.023244 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2dfe4d-751e-4896-9712-035e127f29ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b2dfe4d-751e-4896-9712-035e127f29ca" (UID: "6b2dfe4d-751e-4896-9712-035e127f29ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.026979 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd285c5e-d4b4-4402-97b9-aff12576faef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd285c5e-d4b4-4402-97b9-aff12576faef" (UID: "bd285c5e-d4b4-4402-97b9-aff12576faef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.028551 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff0ab31-0a6b-45a4-8a4d-484c75853276-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ff0ab31-0a6b-45a4-8a4d-484c75853276" (UID: "5ff0ab31-0a6b-45a4-8a4d-484c75853276"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.028905 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff0ab31-0a6b-45a4-8a4d-484c75853276-kube-api-access-gg28h" (OuterVolumeSpecName: "kube-api-access-gg28h") pod "5ff0ab31-0a6b-45a4-8a4d-484c75853276" (UID: "5ff0ab31-0a6b-45a4-8a4d-484c75853276"). InnerVolumeSpecName "kube-api-access-gg28h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.029020 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-kube-api-access-v5hmb" (OuterVolumeSpecName: "kube-api-access-v5hmb") pod "3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" (UID: "3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4"). InnerVolumeSpecName "kube-api-access-v5hmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.029113 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6de9060-b65e-43cc-b492-e19b84135efb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6de9060-b65e-43cc-b492-e19b84135efb" (UID: "b6de9060-b65e-43cc-b492-e19b84135efb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.029571 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" (UID: "3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.032482 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd285c5e-d4b4-4402-97b9-aff12576faef-kube-api-access-lkv2r" (OuterVolumeSpecName: "kube-api-access-lkv2r") pod "bd285c5e-d4b4-4402-97b9-aff12576faef" (UID: "bd285c5e-d4b4-4402-97b9-aff12576faef"). InnerVolumeSpecName "kube-api-access-lkv2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.033658 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2dfe4d-751e-4896-9712-035e127f29ca-kube-api-access-b8wnp" (OuterVolumeSpecName: "kube-api-access-b8wnp") pod "6b2dfe4d-751e-4896-9712-035e127f29ca" (UID: "6b2dfe4d-751e-4896-9712-035e127f29ca"). InnerVolumeSpecName "kube-api-access-b8wnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.034446 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6de9060-b65e-43cc-b492-e19b84135efb-kube-api-access-m8zr9" (OuterVolumeSpecName: "kube-api-access-m8zr9") pod "b6de9060-b65e-43cc-b492-e19b84135efb" (UID: "b6de9060-b65e-43cc-b492-e19b84135efb"). InnerVolumeSpecName "kube-api-access-m8zr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.087242 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0920-account-create-update-jpx4w" event={"ID":"b6de9060-b65e-43cc-b492-e19b84135efb","Type":"ContainerDied","Data":"24c8967ae6c002cbb23bf2c062f5fa79e1ed8e534f5e4a181c6c5fc85841a455"}
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.087522 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24c8967ae6c002cbb23bf2c062f5fa79e1ed8e534f5e4a181c6c5fc85841a455"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.087626 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0920-account-create-update-jpx4w"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.091005 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74ba-account-create-update-8hmpj"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.091307 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74ba-account-create-update-8hmpj" event={"ID":"bd285c5e-d4b4-4402-97b9-aff12576faef","Type":"ContainerDied","Data":"66a29843f555e525fc0ae9af5db886073afc442a39dfccc88356bf91ac59c4a3"}
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.091366 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66a29843f555e525fc0ae9af5db886073afc442a39dfccc88356bf91ac59c4a3"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.093083 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8mj" event={"ID":"7c78780d-8ad0-45e5-86f1-c3cb7beccf0e","Type":"ContainerDied","Data":"9ef5f6f489bb4f9d9fc054ef22c56cc719b3ed059ea3e3be45663756529a56a2"}
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.093120 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef5f6f489bb4f9d9fc054ef22c56cc719b3ed059ea3e3be45663756529a56a2"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.093284 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8mj"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.103771 4752 generic.go:334] "Generic (PLEG): container finished" podID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerID="142644851c495509c91062f475336d914e85547dba46c81d6a1978054f164e2a" exitCode=0
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.103826 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9356406a-3c6e-4af1-a8bb-92244286ba39","Type":"ContainerDied","Data":"142644851c495509c91062f475336d914e85547dba46c81d6a1978054f164e2a"}
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.108585 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hjjs9"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.108590 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hjjs9" event={"ID":"6b2dfe4d-751e-4896-9712-035e127f29ca","Type":"ContainerDied","Data":"37af413822afa2a036f8ff5f9b07c53cf86faa939a84565ee0b4c6dcd2b3110d"}
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.108659 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37af413822afa2a036f8ff5f9b07c53cf86faa939a84565ee0b4c6dcd2b3110d"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.110775 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67e9-account-create-update-8cjp4" event={"ID":"3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4","Type":"ContainerDied","Data":"c371587b3b5529047b7e09ebf54279720849691abeb7ebbfb47368e94572bc92"}
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.110804 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c371587b3b5529047b7e09ebf54279720849691abeb7ebbfb47368e94572bc92"
Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.110896 4752 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-67e9-account-create-update-8cjp4" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.114037 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-rr8lq" event={"ID":"5ff0ab31-0a6b-45a4-8a4d-484c75853276","Type":"ContainerDied","Data":"f58156c667b5ab277c3246d307330d7cf6e1db4ed38b9bd4a446a2e0715c1841"} Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.114072 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f58156c667b5ab277c3246d307330d7cf6e1db4ed38b9bd4a446a2e0715c1841" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.114167 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-rr8lq" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125445 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8zr9\" (UniqueName: \"kubernetes.io/projected/b6de9060-b65e-43cc-b492-e19b84135efb-kube-api-access-m8zr9\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125474 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg28h\" (UniqueName: \"kubernetes.io/projected/5ff0ab31-0a6b-45a4-8a4d-484c75853276-kube-api-access-gg28h\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125498 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2dfe4d-751e-4896-9712-035e127f29ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125512 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hmb\" (UniqueName: \"kubernetes.io/projected/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-kube-api-access-v5hmb\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125524 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd285c5e-d4b4-4402-97b9-aff12576faef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125536 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8wnp\" (UniqueName: \"kubernetes.io/projected/6b2dfe4d-751e-4896-9712-035e127f29ca-kube-api-access-b8wnp\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125547 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125559 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6de9060-b65e-43cc-b492-e19b84135efb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125571 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkv2r\" (UniqueName: \"kubernetes.io/projected/bd285c5e-d4b4-4402-97b9-aff12576faef-kube-api-access-lkv2r\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.125583 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ff0ab31-0a6b-45a4-8a4d-484c75853276-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.394727 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.430815 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zgcg\" (UniqueName: \"kubernetes.io/projected/3789dae0-aca0-4d60-a98b-7970d675e0d0-kube-api-access-9zgcg\") pod \"3789dae0-aca0-4d60-a98b-7970d675e0d0\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.431060 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3789dae0-aca0-4d60-a98b-7970d675e0d0-operator-scripts\") pod \"3789dae0-aca0-4d60-a98b-7970d675e0d0\" (UID: \"3789dae0-aca0-4d60-a98b-7970d675e0d0\") " Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.431551 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3789dae0-aca0-4d60-a98b-7970d675e0d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3789dae0-aca0-4d60-a98b-7970d675e0d0" (UID: "3789dae0-aca0-4d60-a98b-7970d675e0d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.434377 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3789dae0-aca0-4d60-a98b-7970d675e0d0-kube-api-access-9zgcg" (OuterVolumeSpecName: "kube-api-access-9zgcg") pod "3789dae0-aca0-4d60-a98b-7970d675e0d0" (UID: "3789dae0-aca0-4d60-a98b-7970d675e0d0"). InnerVolumeSpecName "kube-api-access-9zgcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.533404 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3789dae0-aca0-4d60-a98b-7970d675e0d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.533756 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zgcg\" (UniqueName: \"kubernetes.io/projected/3789dae0-aca0-4d60-a98b-7970d675e0d0-kube-api-access-9zgcg\") on node \"crc\" DevicePath \"\"" Jan 22 10:42:58 crc kubenswrapper[4752]: I0122 10:42:58.833648 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f9bf46d5-7xzb6" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: i/o timeout" Jan 22 10:42:59 crc kubenswrapper[4752]: I0122 10:42:59.124792 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q7qn" event={"ID":"3789dae0-aca0-4d60-a98b-7970d675e0d0","Type":"ContainerDied","Data":"9ab2afc28bc8e70405a48900ad67d5504fc3116744906a892ee91c76878c75c8"} Jan 22 10:42:59 crc kubenswrapper[4752]: I0122 10:42:59.124830 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab2afc28bc8e70405a48900ad67d5504fc3116744906a892ee91c76878c75c8" Jan 22 10:42:59 crc kubenswrapper[4752]: I0122 10:42:59.124898 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6q7qn" Jan 22 10:42:59 crc kubenswrapper[4752]: I0122 10:42:59.798591 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6q7qn"] Jan 22 10:42:59 crc kubenswrapper[4752]: I0122 10:42:59.806978 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6q7qn"] Jan 22 10:43:01 crc kubenswrapper[4752]: I0122 10:43:01.134011 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3789dae0-aca0-4d60-a98b-7970d675e0d0" path="/var/lib/kubelet/pods/3789dae0-aca0-4d60-a98b-7970d675e0d0/volumes" Jan 22 10:43:01 crc kubenswrapper[4752]: I0122 10:43:01.756653 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-s65hc" podUID="c3a36876-437a-44a1-b61c-ea81f242b231" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 22 10:43:01 crc kubenswrapper[4752]: I0122 10:43:01.843588 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vpt5c" podUID="7d696fd8-24f0-4e7a-801b-6376ea06f238" containerName="ovn-controller" probeResult="failure" output=< Jan 22 10:43:01 crc kubenswrapper[4752]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 10:43:01 crc kubenswrapper[4752]: > Jan 22 10:43:01 crc kubenswrapper[4752]: I0122 10:43:01.929792 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z6f8t" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.677148 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="2e87e3a6ca557c47aa1a29b28c97952e66d28f228a2a925e37de3714e751682c" exitCode=0 Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.677246 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"2e87e3a6ca557c47aa1a29b28c97952e66d28f228a2a925e37de3714e751682c"} Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.677417 4752 scope.go:117] "RemoveContainer" containerID="042ed95c4b3ff840b51322a9e98a655ce91eb46ad1b15d6ef52fd539d78d7a7d" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773484 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vpt5c-config-tf276"] Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773829 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff0ab31-0a6b-45a4-8a4d-484c75853276" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773847 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff0ab31-0a6b-45a4-8a4d-484c75853276" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773884 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd285c5e-d4b4-4402-97b9-aff12576faef" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773892 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd285c5e-d4b4-4402-97b9-aff12576faef" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773908 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" 
containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773914 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773945 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3789dae0-aca0-4d60-a98b-7970d675e0d0" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773952 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3789dae0-aca0-4d60-a98b-7970d675e0d0" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773965 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6de9060-b65e-43cc-b492-e19b84135efb" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773972 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6de9060-b65e-43cc-b492-e19b84135efb" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773982 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2dfe4d-751e-4896-9712-035e127f29ca" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.773988 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2dfe4d-751e-4896-9712-035e127f29ca" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.773997 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774003 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.774015 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="init" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774021 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="init" Jan 22 10:43:02 crc kubenswrapper[4752]: E0122 10:43:02.774041 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774047 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774382 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2dfe4d-751e-4896-9712-035e127f29ca" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774401 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774414 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd285c5e-d4b4-4402-97b9-aff12576faef" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774422 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" containerName="mariadb-account-create-update" Jan 22 
10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774432 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff0ab31-0a6b-45a4-8a4d-484c75853276" containerName="mariadb-database-create" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774440 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3789dae0-aca0-4d60-a98b-7970d675e0d0" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774449 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f26f240-ac69-44d2-9e8a-c119b4a9b8f4" containerName="dnsmasq-dns" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.774458 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6de9060-b65e-43cc-b492-e19b84135efb" containerName="mariadb-account-create-update" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.775072 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.779197 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.801595 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vpt5c-config-tf276"] Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.940808 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.940865 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-scripts\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.940911 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run-ovn\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.940935 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-log-ovn\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.940955 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8qk\" (UniqueName: \"kubernetes.io/projected/82bb6b73-c388-40f4-974e-dcfac30d7ef9-kube-api-access-7x8qk\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:02 crc kubenswrapper[4752]: I0122 10:43:02.941187 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-additional-scripts\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.042547 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.042603 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-scripts\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.042667 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run-ovn\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.042697 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-log-ovn\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.042750 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8qk\" (UniqueName: \"kubernetes.io/projected/82bb6b73-c388-40f4-974e-dcfac30d7ef9-kube-api-access-7x8qk\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.042812 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-additional-scripts\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.043259 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-log-ovn\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.043260 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.043431 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run-ovn\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.043623 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-additional-scripts\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.044882 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-scripts\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.069472 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8qk\" (UniqueName: \"kubernetes.io/projected/82bb6b73-c388-40f4-974e-dcfac30d7ef9-kube-api-access-7x8qk\") pod \"ovn-controller-vpt5c-config-tf276\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.106354 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.490600 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7ztsl"] Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.491813 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.495155 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.517126 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7ztsl"] Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.655364 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46rqr\" (UniqueName: \"kubernetes.io/projected/a7f8123c-1194-4076-80a2-00a8e881e384-kube-api-access-46rqr\") pod \"root-account-create-update-7ztsl\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.655810 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f8123c-1194-4076-80a2-00a8e881e384-operator-scripts\") pod \"root-account-create-update-7ztsl\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.703227 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"98fa078dac5ca30a46bf92bf45d8fc8b321a6f93f3d0f79aa40474301ba963e0"} Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.705312 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vpt5c-config-tf276"] Jan 22 10:43:03 crc kubenswrapper[4752]: W0122 10:43:03.713474 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82bb6b73_c388_40f4_974e_dcfac30d7ef9.slice/crio-f8b712a2dedee37d0d68ad86278313cb39f0e9eb59ae0b53a3a82d13ed1381ac WatchSource:0}: Error finding container f8b712a2dedee37d0d68ad86278313cb39f0e9eb59ae0b53a3a82d13ed1381ac: Status 404 returned error can't find the container with id f8b712a2dedee37d0d68ad86278313cb39f0e9eb59ae0b53a3a82d13ed1381ac Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.713782 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"1b420971d7a28d6adf28b304ad5464646f1ab719c099f5308bed1d82520660a7"} Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.726201 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9356406a-3c6e-4af1-a8bb-92244286ba39","Type":"ContainerStarted","Data":"d0999d11a8ab5b45c1bedd6f8b324decb051608cf2bac2b53cde37e431c0559a"} Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.727187 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.736198 4752 generic.go:334] "Generic (PLEG): container finished" podID="76dee6bc-ab39-4f6c-bc31-6ef18020e5f3" containerID="4c31a2ea966d9d658601982b507b6cfee498c25edc562fe2047f227515e40f2b" exitCode=0 Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.736252 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" 
event={"ID":"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3","Type":"ContainerDied","Data":"4c31a2ea966d9d658601982b507b6cfee498c25edc562fe2047f227515e40f2b"} Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.758873 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46rqr\" (UniqueName: \"kubernetes.io/projected/a7f8123c-1194-4076-80a2-00a8e881e384-kube-api-access-46rqr\") pod \"root-account-create-update-7ztsl\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.758941 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f8123c-1194-4076-80a2-00a8e881e384-operator-scripts\") pod \"root-account-create-update-7ztsl\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.759514 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f8123c-1194-4076-80a2-00a8e881e384-operator-scripts\") pod \"root-account-create-update-7ztsl\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.780109 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371959.074694 podStartE2EDuration="1m17.780082318s" podCreationTimestamp="2026-01-22 10:41:46 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.345952647 +0000 UTC m=+1004.575895555" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:03.770296972 +0000 UTC m=+1063.000239880" watchObservedRunningTime="2026-01-22 10:43:03.780082318 +0000 UTC m=+1063.010025226" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.803496 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46rqr\" (UniqueName: \"kubernetes.io/projected/a7f8123c-1194-4076-80a2-00a8e881e384-kube-api-access-46rqr\") pod \"root-account-create-update-7ztsl\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:03 crc kubenswrapper[4752]: I0122 10:43:03.833889 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.492789 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7ztsl"] Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.769272 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"76dee6bc-ab39-4f6c-bc31-6ef18020e5f3","Type":"ContainerStarted","Data":"d8fc35aba1b2103ace5b46fa7c4b53ede0aa97e2c9a92fe78c2e940f44067b76"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.769778 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.779041 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7ztsl" event={"ID":"a7f8123c-1194-4076-80a2-00a8e881e384","Type":"ContainerStarted","Data":"f190a407e45ac14ceea991235475b5bf17828e3fc9bd6b9751171b143b0e5339"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.811469 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=-9223371958.043331 podStartE2EDuration="1m18.811444829s" podCreationTimestamp="2026-01-22 10:41:46 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.346001008 +0000 UTC m=+1004.575943916" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:04.80691974 +0000 UTC m=+1064.036862648" watchObservedRunningTime="2026-01-22 10:43:04.811444829 +0000 UTC m=+1064.041387737" Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.823222 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"bcf3d69e7108d7b38a799f96d1385b57e121229c24818f36bc98e2df84b08f99"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.823266 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"68ff33c71cda560a4c335a1e4cf4f6f3e2f1d75d2930d7c01e93191e2ccfdbab"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.823278 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"e7d2d63f17769bea6272ac058dd632b2fd3b4fbbe1db2fbf798260646bab2337"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.826472 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-tf276" event={"ID":"82bb6b73-c388-40f4-974e-dcfac30d7ef9","Type":"ContainerStarted","Data":"2ddac350eb56090abadddec2596487408847b69da6506066f2aff9cd7068cc57"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.826500 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-tf276" event={"ID":"82bb6b73-c388-40f4-974e-dcfac30d7ef9","Type":"ContainerStarted","Data":"f8b712a2dedee37d0d68ad86278313cb39f0e9eb59ae0b53a3a82d13ed1381ac"} Jan 22 10:43:04 crc kubenswrapper[4752]: I0122 10:43:04.873041 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vpt5c-config-tf276" podStartSLOduration=2.8730191400000002 podStartE2EDuration="2.87301914s" podCreationTimestamp="2026-01-22 10:43:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:04.863750907 +0000 UTC m=+1064.093693815" watchObservedRunningTime="2026-01-22 10:43:04.87301914 +0000 UTC m=+1064.102962048" Jan 22 10:43:05 crc kubenswrapper[4752]: I0122 10:43:05.839277 4752 generic.go:334] "Generic (PLEG): container finished" podID="a7f8123c-1194-4076-80a2-00a8e881e384" containerID="995568dee6a7fcd3329dc9f8b98154d1c2411479a918e6f78544486382e2212a" exitCode=0 Jan 22 10:43:05 crc kubenswrapper[4752]: I0122 10:43:05.839731 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7ztsl" event={"ID":"a7f8123c-1194-4076-80a2-00a8e881e384","Type":"ContainerDied","Data":"995568dee6a7fcd3329dc9f8b98154d1c2411479a918e6f78544486382e2212a"} Jan 22 10:43:05 crc kubenswrapper[4752]: I0122 10:43:05.841935 4752 generic.go:334] "Generic (PLEG): container finished" podID="82bb6b73-c388-40f4-974e-dcfac30d7ef9" containerID="2ddac350eb56090abadddec2596487408847b69da6506066f2aff9cd7068cc57" exitCode=0 Jan 22 10:43:05 crc kubenswrapper[4752]: I0122 10:43:05.842154 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-tf276" event={"ID":"82bb6b73-c388-40f4-974e-dcfac30d7ef9","Type":"ContainerDied","Data":"2ddac350eb56090abadddec2596487408847b69da6506066f2aff9cd7068cc57"} Jan 22 10:43:06 crc kubenswrapper[4752]: I0122 10:43:06.863361 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vpt5c" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.634915 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.668437 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.735818 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753555 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x8qk\" (UniqueName: \"kubernetes.io/projected/82bb6b73-c388-40f4-974e-dcfac30d7ef9-kube-api-access-7x8qk\") pod \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753619 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f8123c-1194-4076-80a2-00a8e881e384-operator-scripts\") pod \"a7f8123c-1194-4076-80a2-00a8e881e384\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753643 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run-ovn\") pod \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753710 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-scripts\") pod \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753753 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-log-ovn\") pod \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753773 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-additional-scripts\") pod \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753787 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46rqr\" (UniqueName: \"kubernetes.io/projected/a7f8123c-1194-4076-80a2-00a8e881e384-kube-api-access-46rqr\") pod \"a7f8123c-1194-4076-80a2-00a8e881e384\" (UID: \"a7f8123c-1194-4076-80a2-00a8e881e384\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.753908 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run\") pod \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\" (UID: \"82bb6b73-c388-40f4-974e-dcfac30d7ef9\") " Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.754278 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run" (OuterVolumeSpecName: "var-run") pod "82bb6b73-c388-40f4-974e-dcfac30d7ef9" (UID: "82bb6b73-c388-40f4-974e-dcfac30d7ef9"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.755723 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "82bb6b73-c388-40f4-974e-dcfac30d7ef9" (UID: "82bb6b73-c388-40f4-974e-dcfac30d7ef9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.755766 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "82bb6b73-c388-40f4-974e-dcfac30d7ef9" (UID: "82bb6b73-c388-40f4-974e-dcfac30d7ef9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.756059 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f8123c-1194-4076-80a2-00a8e881e384-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7f8123c-1194-4076-80a2-00a8e881e384" (UID: "a7f8123c-1194-4076-80a2-00a8e881e384"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.756828 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "82bb6b73-c388-40f4-974e-dcfac30d7ef9" (UID: "82bb6b73-c388-40f4-974e-dcfac30d7ef9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.756942 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-scripts" (OuterVolumeSpecName: "scripts") pod "82bb6b73-c388-40f4-974e-dcfac30d7ef9" (UID: "82bb6b73-c388-40f4-974e-dcfac30d7ef9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.761129 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f8123c-1194-4076-80a2-00a8e881e384-kube-api-access-46rqr" (OuterVolumeSpecName: "kube-api-access-46rqr") pod "a7f8123c-1194-4076-80a2-00a8e881e384" (UID: "a7f8123c-1194-4076-80a2-00a8e881e384"). InnerVolumeSpecName "kube-api-access-46rqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.765081 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bb6b73-c388-40f4-974e-dcfac30d7ef9-kube-api-access-7x8qk" (OuterVolumeSpecName: "kube-api-access-7x8qk") pod "82bb6b73-c388-40f4-974e-dcfac30d7ef9" (UID: "82bb6b73-c388-40f4-974e-dcfac30d7ef9"). InnerVolumeSpecName "kube-api-access-7x8qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:07 crc kubenswrapper[4752]: E0122 10:43:07.774472 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="9750781f-e5d3-4106-ac9e-431b017df583" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855269 4752 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855551 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x8qk\" (UniqueName: \"kubernetes.io/projected/82bb6b73-c388-40f4-974e-dcfac30d7ef9-kube-api-access-7x8qk\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855563 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f8123c-1194-4076-80a2-00a8e881e384-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855573 4752 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855582 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855592 4752 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82bb6b73-c388-40f4-974e-dcfac30d7ef9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855599 4752 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82bb6b73-c388-40f4-974e-dcfac30d7ef9-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.855608 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46rqr\" (UniqueName: \"kubernetes.io/projected/a7f8123c-1194-4076-80a2-00a8e881e384-kube-api-access-46rqr\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.864356 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"fc34aed05effab0c26abef4153b60ef9746ec78353853936760334ea4430f9b6"} Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.866809 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-tf276" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.866807 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-tf276" event={"ID":"82bb6b73-c388-40f4-974e-dcfac30d7ef9","Type":"ContainerDied","Data":"f8b712a2dedee37d0d68ad86278313cb39f0e9eb59ae0b53a3a82d13ed1381ac"} Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.866843 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b712a2dedee37d0d68ad86278313cb39f0e9eb59ae0b53a3a82d13ed1381ac" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.868188 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7ztsl" event={"ID":"a7f8123c-1194-4076-80a2-00a8e881e384","Type":"ContainerDied","Data":"f190a407e45ac14ceea991235475b5bf17828e3fc9bd6b9751171b143b0e5339"} Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.868229 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f190a407e45ac14ceea991235475b5bf17828e3fc9bd6b9751171b143b0e5339" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.868296 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7ztsl" Jan 22 10:43:07 crc kubenswrapper[4752]: I0122 10:43:07.871196 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerStarted","Data":"499ee119498e34cd9a429c13d54dc6a2976421d77cf36d0c0dc90a60031d6727"} Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.779061 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vpt5c-config-tf276"] Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.786476 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vpt5c-config-tf276"] Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.894195 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"04299c2fab4eb9ca91087fe58484b0f94d51cb8a7854276c4745b84123d936d7"} Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.894260 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"b827511dcae0be61c71ddf0ac42a6263d2e05b2e9cddf31a5b3468dc605d4816"} Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.986923 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vpt5c-config-7g94q"] Jan 22 10:43:08 crc kubenswrapper[4752]: E0122 10:43:08.987335 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bb6b73-c388-40f4-974e-dcfac30d7ef9" containerName="ovn-config" Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.987349 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bb6b73-c388-40f4-974e-dcfac30d7ef9" containerName="ovn-config" Jan 22 10:43:08 crc kubenswrapper[4752]: E0122 10:43:08.987369 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f8123c-1194-4076-80a2-00a8e881e384" containerName="mariadb-account-create-update" Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.987377 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f8123c-1194-4076-80a2-00a8e881e384" 
containerName="mariadb-account-create-update" Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.987523 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f8123c-1194-4076-80a2-00a8e881e384" containerName="mariadb-account-create-update" Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.987543 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bb6b73-c388-40f4-974e-dcfac30d7ef9" containerName="ovn-config" Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.988094 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:08 crc kubenswrapper[4752]: I0122 10:43:08.993332 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.050924 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vpt5c-config-7g94q"] Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.075141 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-log-ovn\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.075214 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.075231 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-scripts\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.075281 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vqj\" (UniqueName: \"kubernetes.io/projected/19d3190e-ad71-4228-88f7-13b12740ed64-kube-api-access-w8vqj\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.075318 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-additional-scripts\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.075341 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run-ovn\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.109596 4752 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bb6b73-c388-40f4-974e-dcfac30d7ef9" path="/var/lib/kubelet/pods/82bb6b73-c388-40f4-974e-dcfac30d7ef9/volumes" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.176888 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8vqj\" (UniqueName: \"kubernetes.io/projected/19d3190e-ad71-4228-88f7-13b12740ed64-kube-api-access-w8vqj\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177205 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-additional-scripts\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177234 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run-ovn\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177309 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-log-ovn\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177336 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177351 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-scripts\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177716 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run-ovn\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177831 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-log-ovn\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.177981 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.178230 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-additional-scripts\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.179195 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-scripts\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.200427 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8vqj\" (UniqueName: \"kubernetes.io/projected/19d3190e-ad71-4228-88f7-13b12740ed64-kube-api-access-w8vqj\") pod \"ovn-controller-vpt5c-config-7g94q\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.303310 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.665083 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vpt5c-config-7g94q"] Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.876434 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7ztsl"] Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.887702 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7ztsl"] Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.909585 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c69c7c08-1b8d-43ad-aad3-bb292f64ad86","Type":"ContainerStarted","Data":"3d1837e9a88d8a097e2bf1e73ce5699ddf0998f3f4ccc655dd1f0a84473f54f3"} Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.909921 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.912765 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-7g94q" event={"ID":"19d3190e-ad71-4228-88f7-13b12740ed64","Type":"ContainerStarted","Data":"0598f97dbf389fbd4440eb39cdf51f7a494f5dea117e45d6bc4673959b88d1ee"} Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.921625 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"1e22391b568e6066aefe85a7b3d893ed5691203a8f9651f03c09d0720063f956"} Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.921686 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"35a1fb8e1f037d6277090823ca1e1c06246c84a22cdd4526536e74366994c3b9"} Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.921697 4752 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"62e055b8639bef50a9883b9995f1a7f442d90a568b1432c2fb80bb5ecb35be1d"} Jan 22 10:43:09 crc kubenswrapper[4752]: I0122 10:43:09.951268 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.352363398 podStartE2EDuration="33.951240837s" podCreationTimestamp="2026-01-22 10:42:36 +0000 UTC" firstStartedPulling="2026-01-22 10:42:37.289073482 +0000 UTC m=+1036.519016390" lastFinishedPulling="2026-01-22 10:43:08.887950921 +0000 UTC m=+1068.117893829" observedRunningTime="2026-01-22 10:43:09.938192386 +0000 UTC m=+1069.168135304" watchObservedRunningTime="2026-01-22 10:43:09.951240837 +0000 UTC m=+1069.181183745" Jan 22 10:43:10 crc kubenswrapper[4752]: I0122 10:43:10.935890 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerStarted","Data":"6eeddeb49a6a8fbc746ba234786d11d42995c02d703c5600974ffe9294ce6443"} Jan 22 10:43:10 crc kubenswrapper[4752]: I0122 10:43:10.944222 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7b96c9ad-917b-4891-a39f-3f19c92bdd30","Type":"ContainerStarted","Data":"4f354f1b78e2bed1dd321e11a134009bc26c2f9a3ab2bb64e9598ec0d6e2c58a"} Jan 22 10:43:10 crc kubenswrapper[4752]: I0122 10:43:10.946233 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-7g94q" event={"ID":"19d3190e-ad71-4228-88f7-13b12740ed64","Type":"ContainerStarted","Data":"ddc4ea8cf03a8ec374f0dcc5200d9e2a316ea1abc0dec9358dd822a960e1017b"} Jan 22 10:43:10 crc kubenswrapper[4752]: I0122 10:43:10.972196 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=13.558594653 podStartE2EDuration="1m17.972174625s" podCreationTimestamp="2026-01-22 10:41:53 +0000 UTC" firstStartedPulling="2026-01-22 10:42:05.304111875 +0000 UTC m=+1004.534054793" lastFinishedPulling="2026-01-22 10:43:09.717691857 +0000 UTC m=+1068.947634765" observedRunningTime="2026-01-22 10:43:10.96512652 +0000 UTC m=+1070.195069428" watchObservedRunningTime="2026-01-22 10:43:10.972174625 +0000 UTC m=+1070.202117533" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.006280 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.044435012 podStartE2EDuration="58.006264746s" podCreationTimestamp="2026-01-22 10:42:13 +0000 UTC" firstStartedPulling="2026-01-22 10:42:54.565041261 +0000 UTC m=+1053.794984169" lastFinishedPulling="2026-01-22 10:43:07.526870995 +0000 UTC m=+1066.756813903" observedRunningTime="2026-01-22 10:43:11.005951008 +0000 UTC m=+1070.235893916" watchObservedRunningTime="2026-01-22 10:43:11.006264746 +0000 UTC m=+1070.236207654" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.037341 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vpt5c-config-7g94q" podStartSLOduration=3.037320219 podStartE2EDuration="3.037320219s" podCreationTimestamp="2026-01-22 10:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:11.032479572 +0000 UTC m=+1070.262422490" watchObservedRunningTime="2026-01-22 
10:43:11.037320219 +0000 UTC m=+1070.267263127" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.110737 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f8123c-1194-4076-80a2-00a8e881e384" path="/var/lib/kubelet/pods/a7f8123c-1194-4076-80a2-00a8e881e384/volumes" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.329083 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-946dbfbcf-dtqkt"] Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.330754 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.335368 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.342326 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-946dbfbcf-dtqkt"] Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.462803 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-sb\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.463107 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-svc\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.463162 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qsf\" (UniqueName: \"kubernetes.io/projected/194c60dd-8bd6-45e5-9e65-62efa4215dd9-kube-api-access-44qsf\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.463291 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-config\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.463330 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-swift-storage-0\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.463370 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-nb\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.564379 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-svc\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.564425 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44qsf\" (UniqueName: \"kubernetes.io/projected/194c60dd-8bd6-45e5-9e65-62efa4215dd9-kube-api-access-44qsf\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.564456 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-config\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.564474 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-swift-storage-0\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.564497 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-nb\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.564565 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-sb\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.565564 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-sb\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.565726 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-config\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.566108 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-swift-storage-0\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.566131 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-svc\") pod 
\"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.566609 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-nb\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.584073 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44qsf\" (UniqueName: \"kubernetes.io/projected/194c60dd-8bd6-45e5-9e65-62efa4215dd9-kube-api-access-44qsf\") pod \"dnsmasq-dns-946dbfbcf-dtqkt\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:11 crc kubenswrapper[4752]: I0122 10:43:11.647166 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:12 crc kubenswrapper[4752]: I0122 10:43:12.344921 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-946dbfbcf-dtqkt"] Jan 22 10:43:12 crc kubenswrapper[4752]: I0122 10:43:12.964243 4752 generic.go:334] "Generic (PLEG): container finished" podID="19d3190e-ad71-4228-88f7-13b12740ed64" containerID="ddc4ea8cf03a8ec374f0dcc5200d9e2a316ea1abc0dec9358dd822a960e1017b" exitCode=0 Jan 22 10:43:12 crc kubenswrapper[4752]: I0122 10:43:12.964351 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-7g94q" event={"ID":"19d3190e-ad71-4228-88f7-13b12740ed64","Type":"ContainerDied","Data":"ddc4ea8cf03a8ec374f0dcc5200d9e2a316ea1abc0dec9358dd822a960e1017b"} Jan 22 10:43:12 crc kubenswrapper[4752]: I0122 10:43:12.966541 4752 generic.go:334] "Generic (PLEG): container finished" podID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerID="1227fa555540dda5b67b93c99e723f990358ced7bce6284e47986cb91db3f954" exitCode=0 Jan 22 10:43:12 crc kubenswrapper[4752]: I0122 10:43:12.966644 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" event={"ID":"194c60dd-8bd6-45e5-9e65-62efa4215dd9","Type":"ContainerDied","Data":"1227fa555540dda5b67b93c99e723f990358ced7bce6284e47986cb91db3f954"} Jan 22 10:43:12 crc kubenswrapper[4752]: I0122 10:43:12.966806 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" event={"ID":"194c60dd-8bd6-45e5-9e65-62efa4215dd9","Type":"ContainerStarted","Data":"f2e25ff2744eddece962a4160f1125c4aaa1bc8a63e29d07a556a905bedb7194"} Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.498164 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tnzlz"] Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.499485 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.501255 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.512424 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tnzlz"] Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.597904 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-operator-scripts\") pod \"root-account-create-update-tnzlz\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.597968 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdxbp\" (UniqueName: \"kubernetes.io/projected/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-kube-api-access-tdxbp\") pod \"root-account-create-update-tnzlz\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.699384 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-operator-scripts\") pod \"root-account-create-update-tnzlz\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.699459 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdxbp\" (UniqueName: \"kubernetes.io/projected/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-kube-api-access-tdxbp\") pod \"root-account-create-update-tnzlz\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.700661 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-operator-scripts\") pod \"root-account-create-update-tnzlz\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.717914 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdxbp\" (UniqueName: \"kubernetes.io/projected/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-kube-api-access-tdxbp\") pod \"root-account-create-update-tnzlz\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:13 crc kubenswrapper[4752]: I0122 10:43:13.818088 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.024594 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" event={"ID":"194c60dd-8bd6-45e5-9e65-62efa4215dd9","Type":"ContainerStarted","Data":"ed01d30ec393d9c5c34f64597657744e62d121715e967951df0786cd8e15ce0a"} Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.024833 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.078288 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" podStartSLOduration=3.078266641 podStartE2EDuration="3.078266641s" podCreationTimestamp="2026-01-22 10:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:14.061214554 +0000 UTC m=+1073.291157462" watchObservedRunningTime="2026-01-22 10:43:14.078266641 +0000 UTC m=+1073.308209549" Jan 22 10:43:14 crc kubenswrapper[4752]: W0122 10:43:14.182930 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8aa6ec4_3fcf_44bb_ba70_b5a565a7f5c8.slice/crio-188175561fcb33ad42cd5cd218c2f40b39495a86dcefcaa966c23d8df725e852 WatchSource:0}: Error finding container 188175561fcb33ad42cd5cd218c2f40b39495a86dcefcaa966c23d8df725e852: Status 404 returned error can't find the container with id 188175561fcb33ad42cd5cd218c2f40b39495a86dcefcaa966c23d8df725e852 Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.198800 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tnzlz"] Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.378308 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.522758 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-log-ovn\") pod \"19d3190e-ad71-4228-88f7-13b12740ed64\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.522824 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-scripts\") pod \"19d3190e-ad71-4228-88f7-13b12740ed64\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.522845 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-additional-scripts\") pod \"19d3190e-ad71-4228-88f7-13b12740ed64\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.522965 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run\") pod \"19d3190e-ad71-4228-88f7-13b12740ed64\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523024 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8vqj\" (UniqueName: \"kubernetes.io/projected/19d3190e-ad71-4228-88f7-13b12740ed64-kube-api-access-w8vqj\") pod \"19d3190e-ad71-4228-88f7-13b12740ed64\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523042 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run-ovn\") pod \"19d3190e-ad71-4228-88f7-13b12740ed64\" (UID: \"19d3190e-ad71-4228-88f7-13b12740ed64\") " Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523070 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run" (OuterVolumeSpecName: "var-run") pod "19d3190e-ad71-4228-88f7-13b12740ed64" (UID: "19d3190e-ad71-4228-88f7-13b12740ed64"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523068 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "19d3190e-ad71-4228-88f7-13b12740ed64" (UID: "19d3190e-ad71-4228-88f7-13b12740ed64"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523214 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "19d3190e-ad71-4228-88f7-13b12740ed64" (UID: "19d3190e-ad71-4228-88f7-13b12740ed64"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523427 4752 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523440 4752 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.523448 4752 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19d3190e-ad71-4228-88f7-13b12740ed64-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.524161 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-scripts" (OuterVolumeSpecName: "scripts") pod "19d3190e-ad71-4228-88f7-13b12740ed64" (UID: "19d3190e-ad71-4228-88f7-13b12740ed64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.524331 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "19d3190e-ad71-4228-88f7-13b12740ed64" (UID: "19d3190e-ad71-4228-88f7-13b12740ed64"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.527520 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d3190e-ad71-4228-88f7-13b12740ed64-kube-api-access-w8vqj" (OuterVolumeSpecName: "kube-api-access-w8vqj") pod "19d3190e-ad71-4228-88f7-13b12740ed64" (UID: "19d3190e-ad71-4228-88f7-13b12740ed64"). InnerVolumeSpecName "kube-api-access-w8vqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.625935 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.625973 4752 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/19d3190e-ad71-4228-88f7-13b12740ed64-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.625984 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8vqj\" (UniqueName: \"kubernetes.io/projected/19d3190e-ad71-4228-88f7-13b12740ed64-kube-api-access-w8vqj\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:14 crc kubenswrapper[4752]: I0122 10:43:14.786389 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.033530 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vpt5c-config-7g94q" Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.033546 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vpt5c-config-7g94q" event={"ID":"19d3190e-ad71-4228-88f7-13b12740ed64","Type":"ContainerDied","Data":"0598f97dbf389fbd4440eb39cdf51f7a494f5dea117e45d6bc4673959b88d1ee"} Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.033608 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0598f97dbf389fbd4440eb39cdf51f7a494f5dea117e45d6bc4673959b88d1ee" Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.035214 4752 generic.go:334] "Generic (PLEG): container finished" podID="b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" containerID="1cdce13328688e01b7341da8ed77283903647fcf2bdf11e0cd49eedafbe92ecf" exitCode=0 Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.035495 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tnzlz" event={"ID":"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8","Type":"ContainerDied","Data":"1cdce13328688e01b7341da8ed77283903647fcf2bdf11e0cd49eedafbe92ecf"} Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.035529 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tnzlz" event={"ID":"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8","Type":"ContainerStarted","Data":"188175561fcb33ad42cd5cd218c2f40b39495a86dcefcaa966c23d8df725e852"} Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.448333 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vpt5c-config-7g94q"] Jan 22 10:43:15 crc kubenswrapper[4752]: I0122 10:43:15.453611 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vpt5c-config-7g94q"] Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.374882 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.468331 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdxbp\" (UniqueName: \"kubernetes.io/projected/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-kube-api-access-tdxbp\") pod \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.468454 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-operator-scripts\") pod \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\" (UID: \"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8\") " Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.468995 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" (UID: "b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.474055 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-kube-api-access-tdxbp" (OuterVolumeSpecName: "kube-api-access-tdxbp") pod "b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" (UID: "b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8"). InnerVolumeSpecName "kube-api-access-tdxbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.570689 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:16 crc kubenswrapper[4752]: I0122 10:43:16.570733 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdxbp\" (UniqueName: \"kubernetes.io/projected/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8-kube-api-access-tdxbp\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:17 crc kubenswrapper[4752]: I0122 10:43:17.051640 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tnzlz" event={"ID":"b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8","Type":"ContainerDied","Data":"188175561fcb33ad42cd5cd218c2f40b39495a86dcefcaa966c23d8df725e852"} Jan 22 10:43:17 crc kubenswrapper[4752]: I0122 10:43:17.051940 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188175561fcb33ad42cd5cd218c2f40b39495a86dcefcaa966c23d8df725e852" Jan 22 10:43:17 crc kubenswrapper[4752]: I0122 10:43:17.051748 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tnzlz" Jan 22 10:43:17 crc kubenswrapper[4752]: I0122 10:43:17.108964 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d3190e-ad71-4228-88f7-13b12740ed64" path="/var/lib/kubelet/pods/19d3190e-ad71-4228-88f7-13b12740ed64/volumes" Jan 22 10:43:17 crc kubenswrapper[4752]: I0122 10:43:17.736012 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 10:43:17 crc kubenswrapper[4752]: I0122 10:43:17.973027 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.199668 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-vbsj2"] Jan 22 10:43:18 crc kubenswrapper[4752]: E0122 10:43:18.200246 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d3190e-ad71-4228-88f7-13b12740ed64" containerName="ovn-config" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.200262 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d3190e-ad71-4228-88f7-13b12740ed64" containerName="ovn-config" Jan 22 10:43:18 crc kubenswrapper[4752]: E0122 10:43:18.200293 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" containerName="mariadb-account-create-update" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.200299 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" containerName="mariadb-account-create-update" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 
10:43:18.200447 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d3190e-ad71-4228-88f7-13b12740ed64" containerName="ovn-config" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.200473 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" containerName="mariadb-account-create-update" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.201099 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.206220 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.206333 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-86g9j" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.240451 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-vbsj2"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.246891 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pc4tl"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.247971 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.253111 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pc4tl"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.271837 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="76dee6bc-ab39-4f6c-bc31-6ef18020e5f3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.324098 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-db-sync-config-data\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.324179 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frrs\" (UniqueName: \"kubernetes.io/projected/a1192a1d-8861-4ce2-bfee-1360fecff6e7-kube-api-access-8frrs\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.324266 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-config-data\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.324425 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-combined-ca-bundle\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.333994 4752 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-f88d-account-create-update-8pd96"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.334959 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.340478 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.350592 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f88d-account-create-update-8pd96"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.423572 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sq5kv"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.424871 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.425871 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-combined-ca-bundle\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.425938 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-db-sync-config-data\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.425996 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdjs\" (UniqueName: \"kubernetes.io/projected/304e6123-7015-427f-a6bd-d950c0e6c7d3-kube-api-access-9xdjs\") pod \"cinder-db-create-pc4tl\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.426029 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frrs\" (UniqueName: \"kubernetes.io/projected/a1192a1d-8861-4ce2-bfee-1360fecff6e7-kube-api-access-8frrs\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.426052 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zxmm\" (UniqueName: \"kubernetes.io/projected/d82c4b09-addc-4fb4-97ab-aa791a082372-kube-api-access-9zxmm\") pod \"cinder-f88d-account-create-update-8pd96\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.426142 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/304e6123-7015-427f-a6bd-d950c0e6c7d3-operator-scripts\") pod \"cinder-db-create-pc4tl\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.426217 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d82c4b09-addc-4fb4-97ab-aa791a082372-operator-scripts\") pod \"cinder-f88d-account-create-update-8pd96\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.426247 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-config-data\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.432252 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-db-sync-config-data\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.432515 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-combined-ca-bundle\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.437961 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-config-data\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.446545 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b87d-account-create-update-xj66b"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.448165 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.452217 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.453068 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sq5kv"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.456055 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frrs\" (UniqueName: \"kubernetes.io/projected/a1192a1d-8861-4ce2-bfee-1360fecff6e7-kube-api-access-8frrs\") pod \"watcher-db-sync-vbsj2\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.458633 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b87d-account-create-update-xj66b"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.519645 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527630 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82c4b09-addc-4fb4-97ab-aa791a082372-operator-scripts\") pod \"cinder-f88d-account-create-update-8pd96\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527734 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xdjs\" (UniqueName: \"kubernetes.io/projected/304e6123-7015-427f-a6bd-d950c0e6c7d3-kube-api-access-9xdjs\") pod \"cinder-db-create-pc4tl\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527755 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxmm\" (UniqueName: \"kubernetes.io/projected/d82c4b09-addc-4fb4-97ab-aa791a082372-kube-api-access-9zxmm\") pod \"cinder-f88d-account-create-update-8pd96\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527806 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-operator-scripts\") pod \"barbican-db-create-sq5kv\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527831 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/304e6123-7015-427f-a6bd-d950c0e6c7d3-operator-scripts\") pod \"cinder-db-create-pc4tl\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527886 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkg7f\" (UniqueName: \"kubernetes.io/projected/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-kube-api-access-wkg7f\") pod \"barbican-db-create-sq5kv\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527906 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-operator-scripts\") pod \"barbican-b87d-account-create-update-xj66b\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.527932 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqjb\" (UniqueName: \"kubernetes.io/projected/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-kube-api-access-dhqjb\") pod \"barbican-b87d-account-create-update-xj66b\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.528649 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d82c4b09-addc-4fb4-97ab-aa791a082372-operator-scripts\") pod \"cinder-f88d-account-create-update-8pd96\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.529916 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/304e6123-7015-427f-a6bd-d950c0e6c7d3-operator-scripts\") pod \"cinder-db-create-pc4tl\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.541050 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-j4drz"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.542064 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.550419 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.550832 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.551001 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.551261 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bj8bx" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.556299 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxmm\" (UniqueName: \"kubernetes.io/projected/d82c4b09-addc-4fb4-97ab-aa791a082372-kube-api-access-9zxmm\") pod \"cinder-f88d-account-create-update-8pd96\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.558004 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j4drz"] Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.564016 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xdjs\" (UniqueName: \"kubernetes.io/projected/304e6123-7015-427f-a6bd-d950c0e6c7d3-kube-api-access-9xdjs\") pod \"cinder-db-create-pc4tl\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.628979 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q64hj\" (UniqueName: \"kubernetes.io/projected/446a849f-df12-4b01-8457-dd5c828dd567-kube-api-access-q64hj\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.629355 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-combined-ca-bundle\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.629518 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-operator-scripts\") pod \"barbican-db-create-sq5kv\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.629837 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkg7f\" (UniqueName: \"kubernetes.io/projected/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-kube-api-access-wkg7f\") pod \"barbican-db-create-sq5kv\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.629968 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-operator-scripts\") pod \"barbican-b87d-account-create-update-xj66b\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.630088 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqjb\" (UniqueName: \"kubernetes.io/projected/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-kube-api-access-dhqjb\") pod \"barbican-b87d-account-create-update-xj66b\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.630203 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-config-data\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.630518 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-operator-scripts\") pod \"barbican-db-create-sq5kv\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.631173 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-operator-scripts\") pod \"barbican-b87d-account-create-update-xj66b\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.648234 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.654612 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqjb\" (UniqueName: \"kubernetes.io/projected/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-kube-api-access-dhqjb\") pod \"barbican-b87d-account-create-update-xj66b\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.676701 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkg7f\" (UniqueName: \"kubernetes.io/projected/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-kube-api-access-wkg7f\") pod \"barbican-db-create-sq5kv\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.731920 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-combined-ca-bundle\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.732047 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-config-data\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.732084 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q64hj\" (UniqueName: \"kubernetes.io/projected/446a849f-df12-4b01-8457-dd5c828dd567-kube-api-access-q64hj\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.738831 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-config-data\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.739761 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-combined-ca-bundle\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.766487 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q64hj\" (UniqueName: \"kubernetes.io/projected/446a849f-df12-4b01-8457-dd5c828dd567-kube-api-access-q64hj\") pod \"keystone-db-sync-j4drz\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.825262 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.834711 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.861239 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:18 crc kubenswrapper[4752]: I0122 10:43:18.948684 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.169520 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f88d-account-create-update-8pd96"] Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.226173 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-vbsj2"] Jan 22 10:43:19 crc kubenswrapper[4752]: W0122 10:43:19.244586 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1192a1d_8861_4ce2_bfee_1360fecff6e7.slice/crio-4abd87f4669ed9f467b6e6217f14893cceab8a4f96f6c0633827c961e97f4c01 WatchSource:0}: Error finding container 4abd87f4669ed9f467b6e6217f14893cceab8a4f96f6c0633827c961e97f4c01: Status 404 returned error can't find the container with id 4abd87f4669ed9f467b6e6217f14893cceab8a4f96f6c0633827c961e97f4c01 Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.558071 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sq5kv"] Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.701921 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pc4tl"] Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.713592 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b87d-account-create-update-xj66b"] Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.779190 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j4drz"] Jan 22 10:43:19 crc kubenswrapper[4752]: W0122 10:43:19.812133 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod446a849f_df12_4b01_8457_dd5c828dd567.slice/crio-a25383bfb8a5e58275a494fcead6ec12a8349fa12fa9d6245d67b454f02c4a29 WatchSource:0}: Error finding container a25383bfb8a5e58275a494fcead6ec12a8349fa12fa9d6245d67b454f02c4a29: Status 404 returned error can't find the container with id a25383bfb8a5e58275a494fcead6ec12a8349fa12fa9d6245d67b454f02c4a29 Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.978928 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tnzlz"] Jan 22 10:43:19 crc kubenswrapper[4752]: I0122 10:43:19.988810 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tnzlz"] Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.086414 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pc4tl" event={"ID":"304e6123-7015-427f-a6bd-d950c0e6c7d3","Type":"ContainerStarted","Data":"948629349e0af91adf24411b75edfe432ea00c1c5c0cc022156ce2ea48364faf"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.086461 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pc4tl" event={"ID":"304e6123-7015-427f-a6bd-d950c0e6c7d3","Type":"ContainerStarted","Data":"cf94da15ead17595efc52d2d2b0782da83c3f99c8bdda69b9aabec581517c697"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.087783 4752 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-db-create-sq5kv" event={"ID":"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4","Type":"ContainerStarted","Data":"f03afed889aeec8cd1d80f13f06c4bbfa0d476c8e562f941b7d58c71e4988cc5"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.087950 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sq5kv" event={"ID":"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4","Type":"ContainerStarted","Data":"34eca2ae48c21aec531695538af6c568bb9b248449a28f28c96a32e87e618080"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.090304 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-vbsj2" event={"ID":"a1192a1d-8861-4ce2-bfee-1360fecff6e7","Type":"ContainerStarted","Data":"4abd87f4669ed9f467b6e6217f14893cceab8a4f96f6c0633827c961e97f4c01"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.091651 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f88d-account-create-update-8pd96" event={"ID":"d82c4b09-addc-4fb4-97ab-aa791a082372","Type":"ContainerStarted","Data":"a242997d949a0d12d9d6fb06607f89f71814f3a33cddbf2abaaf61569593b35d"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.091674 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f88d-account-create-update-8pd96" event={"ID":"d82c4b09-addc-4fb4-97ab-aa791a082372","Type":"ContainerStarted","Data":"cc65137660cddf833e9bc2a8b956c4f8f9fb3c8ed27437dec547cfbbf6ae7f78"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.097321 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4drz" event={"ID":"446a849f-df12-4b01-8457-dd5c828dd567","Type":"ContainerStarted","Data":"a25383bfb8a5e58275a494fcead6ec12a8349fa12fa9d6245d67b454f02c4a29"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.101184 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b87d-account-create-update-xj66b" event={"ID":"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1","Type":"ContainerStarted","Data":"be53f9b7c615452e34ca562a69e57bf4f9f21a8b248698c458a7475b07ca20c5"} Jan 22 10:43:20 crc kubenswrapper[4752]: I0122 10:43:20.162047 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-sq5kv" podStartSLOduration=2.162023613 podStartE2EDuration="2.162023613s" podCreationTimestamp="2026-01-22 10:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:20.135109509 +0000 UTC m=+1079.365052417" watchObservedRunningTime="2026-01-22 10:43:20.162023613 +0000 UTC m=+1079.391966521" Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.114817 4752 generic.go:334] "Generic (PLEG): container finished" podID="0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" containerID="f03afed889aeec8cd1d80f13f06c4bbfa0d476c8e562f941b7d58c71e4988cc5" exitCode=0 Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.120293 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8" path="/var/lib/kubelet/pods/b8aa6ec4-3fcf-44bb-ba70-b5a565a7f5c8/volumes" Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.121762 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sq5kv" event={"ID":"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4","Type":"ContainerDied","Data":"f03afed889aeec8cd1d80f13f06c4bbfa0d476c8e562f941b7d58c71e4988cc5"} Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 
10:43:21.129599 4752 generic.go:334] "Generic (PLEG): container finished" podID="d82c4b09-addc-4fb4-97ab-aa791a082372" containerID="a242997d949a0d12d9d6fb06607f89f71814f3a33cddbf2abaaf61569593b35d" exitCode=0 Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.129677 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f88d-account-create-update-8pd96" event={"ID":"d82c4b09-addc-4fb4-97ab-aa791a082372","Type":"ContainerDied","Data":"a242997d949a0d12d9d6fb06607f89f71814f3a33cddbf2abaaf61569593b35d"} Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.133131 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f88d-account-create-update-8pd96" podStartSLOduration=3.133087207 podStartE2EDuration="3.133087207s" podCreationTimestamp="2026-01-22 10:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:43:20.165141145 +0000 UTC m=+1079.395084053" watchObservedRunningTime="2026-01-22 10:43:21.133087207 +0000 UTC m=+1080.363030125" Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.135825 4752 generic.go:334] "Generic (PLEG): container finished" podID="dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" containerID="c2d058308616b86ef40c87e03dbc40ff7ac121a980b90cbf24d2dafd01f35c68" exitCode=0 Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.135905 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b87d-account-create-update-xj66b" event={"ID":"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1","Type":"ContainerDied","Data":"c2d058308616b86ef40c87e03dbc40ff7ac121a980b90cbf24d2dafd01f35c68"} Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.141405 4752 generic.go:334] "Generic (PLEG): container finished" podID="304e6123-7015-427f-a6bd-d950c0e6c7d3" containerID="948629349e0af91adf24411b75edfe432ea00c1c5c0cc022156ce2ea48364faf" exitCode=0 Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.141439 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pc4tl" event={"ID":"304e6123-7015-427f-a6bd-d950c0e6c7d3","Type":"ContainerDied","Data":"948629349e0af91adf24411b75edfe432ea00c1c5c0cc022156ce2ea48364faf"} Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.649354 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.736504 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"] Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.736759 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="dnsmasq-dns" containerID="cri-o://42ac7c02f6c6b5a4960999b646809c2016231386685668322a157f0e60127f8f" gracePeriod=10 Jan 22 10:43:21 crc kubenswrapper[4752]: I0122 10:43:21.947098 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 22 10:43:22 crc kubenswrapper[4752]: I0122 10:43:22.167908 4752 generic.go:334] "Generic (PLEG): container finished" podID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerID="42ac7c02f6c6b5a4960999b646809c2016231386685668322a157f0e60127f8f" exitCode=0 Jan 22 10:43:22 crc kubenswrapper[4752]: I0122 10:43:22.168183 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" 
event={"ID":"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e","Type":"ContainerDied","Data":"42ac7c02f6c6b5a4960999b646809c2016231386685668322a157f0e60127f8f"} Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.786979 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.796360 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.936507 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rndfm"] Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.940081 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.946089 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rndfm"] Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.946847 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.962431 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbv7\" (UniqueName: \"kubernetes.io/projected/ed2637ca-135b-4963-849a-d95c79b04aea-kube-api-access-9lbv7\") pod \"root-account-create-update-rndfm\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:24 crc kubenswrapper[4752]: I0122 10:43:24.962524 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2637ca-135b-4963-849a-d95c79b04aea-operator-scripts\") pod \"root-account-create-update-rndfm\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:25 crc kubenswrapper[4752]: I0122 10:43:25.063895 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lbv7\" (UniqueName: \"kubernetes.io/projected/ed2637ca-135b-4963-849a-d95c79b04aea-kube-api-access-9lbv7\") pod \"root-account-create-update-rndfm\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:25 crc kubenswrapper[4752]: I0122 10:43:25.063946 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2637ca-135b-4963-849a-d95c79b04aea-operator-scripts\") pod \"root-account-create-update-rndfm\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:25 crc kubenswrapper[4752]: I0122 10:43:25.064800 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2637ca-135b-4963-849a-d95c79b04aea-operator-scripts\") pod \"root-account-create-update-rndfm\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:25 crc kubenswrapper[4752]: I0122 10:43:25.092539 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lbv7\" (UniqueName: \"kubernetes.io/projected/ed2637ca-135b-4963-849a-d95c79b04aea-kube-api-access-9lbv7\") pod 
\"root-account-create-update-rndfm\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:25 crc kubenswrapper[4752]: I0122 10:43:25.198115 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:25 crc kubenswrapper[4752]: I0122 10:43:25.261277 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:27 crc kubenswrapper[4752]: I0122 10:43:27.896054 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 22 10:43:27 crc kubenswrapper[4752]: I0122 10:43:27.959257 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:43:27 crc kubenswrapper[4752]: I0122 10:43:27.959501 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="config-reloader" containerID="cri-o://57537c585f30b507b452c4fda8255d7785f54efc4247a2458aba85e17537ff34" gracePeriod=600 Jan 22 10:43:27 crc kubenswrapper[4752]: I0122 10:43:27.959641 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="thanos-sidecar" containerID="cri-o://499ee119498e34cd9a429c13d54dc6a2976421d77cf36d0c0dc90a60031d6727" gracePeriod=600 Jan 22 10:43:27 crc kubenswrapper[4752]: I0122 10:43:27.959654 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="prometheus" containerID="cri-o://6eeddeb49a6a8fbc746ba234786d11d42995c02d703c5600974ffe9294ce6443" gracePeriod=600 Jan 22 10:43:27 crc kubenswrapper[4752]: I0122 10:43:27.970732 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.162043 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.171620 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.195246 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.266101 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f88d-account-create-update-8pd96" event={"ID":"d82c4b09-addc-4fb4-97ab-aa791a082372","Type":"ContainerDied","Data":"cc65137660cddf833e9bc2a8b956c4f8f9fb3c8ed27437dec547cfbbf6ae7f78"} Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.266147 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc65137660cddf833e9bc2a8b956c4f8f9fb3c8ed27437dec547cfbbf6ae7f78" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.266214 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f88d-account-create-update-8pd96" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.271327 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.298413 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b87d-account-create-update-xj66b" event={"ID":"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1","Type":"ContainerDied","Data":"be53f9b7c615452e34ca562a69e57bf4f9f21a8b248698c458a7475b07ca20c5"} Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.298456 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be53f9b7c615452e34ca562a69e57bf4f9f21a8b248698c458a7475b07ca20c5" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.298519 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b87d-account-create-update-xj66b" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.314557 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhqjb\" (UniqueName: \"kubernetes.io/projected/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-kube-api-access-dhqjb\") pod \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.314630 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82c4b09-addc-4fb4-97ab-aa791a082372-operator-scripts\") pod \"d82c4b09-addc-4fb4-97ab-aa791a082372\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.314724 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-sb\") pod \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.314776 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zxmm\" (UniqueName: \"kubernetes.io/projected/d82c4b09-addc-4fb4-97ab-aa791a082372-kube-api-access-9zxmm\") pod \"d82c4b09-addc-4fb4-97ab-aa791a082372\" (UID: \"d82c4b09-addc-4fb4-97ab-aa791a082372\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.314901 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-config\") pod \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.314940 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-kube-api-access-dm6b9\") pod \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.315191 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-dns-svc\") pod \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 
10:43:28.315275 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-operator-scripts\") pod \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\" (UID: \"dc3d4ddd-7439-4fd2-bc21-d57caa0910a1\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.316311 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-nb\") pod \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\" (UID: \"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e\") " Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.334230 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82c4b09-addc-4fb4-97ab-aa791a082372-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d82c4b09-addc-4fb4-97ab-aa791a082372" (UID: "d82c4b09-addc-4fb4-97ab-aa791a082372"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.336426 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" (UID: "dc3d4ddd-7439-4fd2-bc21-d57caa0910a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.343819 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" event={"ID":"8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e","Type":"ContainerDied","Data":"6dd5dd4b89f8c719edcae3d9da89e02b3c2f71e555d43d7bfc306414757f18a7"} Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.343896 4752 scope.go:117] "RemoveContainer" containerID="42ac7c02f6c6b5a4960999b646809c2016231386685668322a157f0e60127f8f" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.344058 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.365133 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-kube-api-access-dhqjb" (OuterVolumeSpecName: "kube-api-access-dhqjb") pod "dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" (UID: "dc3d4ddd-7439-4fd2-bc21-d57caa0910a1"). InnerVolumeSpecName "kube-api-access-dhqjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.371107 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82c4b09-addc-4fb4-97ab-aa791a082372-kube-api-access-9zxmm" (OuterVolumeSpecName: "kube-api-access-9zxmm") pod "d82c4b09-addc-4fb4-97ab-aa791a082372" (UID: "d82c4b09-addc-4fb4-97ab-aa791a082372"). InnerVolumeSpecName "kube-api-access-9zxmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.376143 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-kube-api-access-dm6b9" (OuterVolumeSpecName: "kube-api-access-dm6b9") pod "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" (UID: "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e"). 
InnerVolumeSpecName "kube-api-access-dm6b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.423336 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhqjb\" (UniqueName: \"kubernetes.io/projected/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-kube-api-access-dhqjb\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.423370 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82c4b09-addc-4fb4-97ab-aa791a082372-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.423384 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zxmm\" (UniqueName: \"kubernetes.io/projected/d82c4b09-addc-4fb4-97ab-aa791a082372-kube-api-access-9zxmm\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.423395 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-kube-api-access-dm6b9\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.423408 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.432786 4752 generic.go:334] "Generic (PLEG): container finished" podID="9750781f-e5d3-4106-ac9e-431b017df583" containerID="6eeddeb49a6a8fbc746ba234786d11d42995c02d703c5600974ffe9294ce6443" exitCode=0 Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.432817 4752 generic.go:334] "Generic (PLEG): container finished" podID="9750781f-e5d3-4106-ac9e-431b017df583" containerID="499ee119498e34cd9a429c13d54dc6a2976421d77cf36d0c0dc90a60031d6727" exitCode=0 Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.432836 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerDied","Data":"6eeddeb49a6a8fbc746ba234786d11d42995c02d703c5600974ffe9294ce6443"} Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.432871 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerDied","Data":"499ee119498e34cd9a429c13d54dc6a2976421d77cf36d0c0dc90a60031d6727"} Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.461241 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-config" (OuterVolumeSpecName: "config") pod "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" (UID: "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.484362 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" (UID: "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.489808 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" (UID: "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.496990 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" (UID: "8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.524395 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.524430 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.524441 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.524451 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.678706 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"] Jan 22 10:43:28 crc kubenswrapper[4752]: I0122 10:43:28.687418 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55ddfd5dfc-4mjcj"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.115419 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" path="/var/lib/kubelet/pods/8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e/volumes" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.446697 4752 generic.go:334] "Generic (PLEG): container finished" podID="9750781f-e5d3-4106-ac9e-431b017df583" containerID="57537c585f30b507b452c4fda8255d7785f54efc4247a2458aba85e17537ff34" exitCode=0 Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.446743 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerDied","Data":"57537c585f30b507b452c4fda8255d7785f54efc4247a2458aba85e17537ff34"} Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644228 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rnpv4"] Jan 22 10:43:29 crc kubenswrapper[4752]: E0122 10:43:29.644567 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82c4b09-addc-4fb4-97ab-aa791a082372" containerName="mariadb-account-create-update" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 
10:43:29.644583 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82c4b09-addc-4fb4-97ab-aa791a082372" containerName="mariadb-account-create-update" Jan 22 10:43:29 crc kubenswrapper[4752]: E0122 10:43:29.644596 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="init" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644603 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="init" Jan 22 10:43:29 crc kubenswrapper[4752]: E0122 10:43:29.644612 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="dnsmasq-dns" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644618 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="dnsmasq-dns" Jan 22 10:43:29 crc kubenswrapper[4752]: E0122 10:43:29.644640 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" containerName="mariadb-account-create-update" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644646 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" containerName="mariadb-account-create-update" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644824 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" containerName="mariadb-account-create-update" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644835 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="dnsmasq-dns" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.644847 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82c4b09-addc-4fb4-97ab-aa791a082372" containerName="mariadb-account-create-update" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.645390 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.659714 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rnpv4"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.748537 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-88fb-account-create-update-lg4pl"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.750030 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.751902 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.754071 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdc69\" (UniqueName: \"kubernetes.io/projected/b1709564-73da-4021-8b4a-865eb06625c0-kube-api-access-qdc69\") pod \"glance-db-create-rnpv4\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.754110 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1709564-73da-4021-8b4a-865eb06625c0-operator-scripts\") pod \"glance-db-create-rnpv4\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.776031 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-88fb-account-create-update-lg4pl"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.788536 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.855820 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsmq\" (UniqueName: \"kubernetes.io/projected/33a79a45-be65-4f75-be4a-75f5f1f87ce3-kube-api-access-nbsmq\") pod \"glance-88fb-account-create-update-lg4pl\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.856019 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a79a45-be65-4f75-be4a-75f5f1f87ce3-operator-scripts\") pod \"glance-88fb-account-create-update-lg4pl\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.856069 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdc69\" (UniqueName: \"kubernetes.io/projected/b1709564-73da-4021-8b4a-865eb06625c0-kube-api-access-qdc69\") pod \"glance-db-create-rnpv4\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.856096 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1709564-73da-4021-8b4a-865eb06625c0-operator-scripts\") pod \"glance-db-create-rnpv4\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.856837 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1709564-73da-4021-8b4a-865eb06625c0-operator-scripts\") pod \"glance-db-create-rnpv4\" (UID: 
\"b1709564-73da-4021-8b4a-865eb06625c0\") " pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.861618 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-s2g9g"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.863015 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.867639 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s2g9g"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.876890 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdc69\" (UniqueName: \"kubernetes.io/projected/b1709564-73da-4021-8b4a-865eb06625c0-kube-api-access-qdc69\") pod \"glance-db-create-rnpv4\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.955344 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-80e3-account-create-update-59d9n"] Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.956968 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.957169 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-925g6\" (UniqueName: \"kubernetes.io/projected/6604eed0-a1d2-4ac2-9dba-66e4228899ec-kube-api-access-925g6\") pod \"neutron-db-create-s2g9g\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.957233 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbsmq\" (UniqueName: \"kubernetes.io/projected/33a79a45-be65-4f75-be4a-75f5f1f87ce3-kube-api-access-nbsmq\") pod \"glance-88fb-account-create-update-lg4pl\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.957328 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6604eed0-a1d2-4ac2-9dba-66e4228899ec-operator-scripts\") pod \"neutron-db-create-s2g9g\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.957392 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a79a45-be65-4f75-be4a-75f5f1f87ce3-operator-scripts\") pod \"glance-88fb-account-create-update-lg4pl\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.958267 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a79a45-be65-4f75-be4a-75f5f1f87ce3-operator-scripts\") pod \"glance-88fb-account-create-update-lg4pl\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.961087 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" 
Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.962183 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:29 crc kubenswrapper[4752]: I0122 10:43:29.976696 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-80e3-account-create-update-59d9n"] Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.007526 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbsmq\" (UniqueName: \"kubernetes.io/projected/33a79a45-be65-4f75-be4a-75f5f1f87ce3-kube-api-access-nbsmq\") pod \"glance-88fb-account-create-update-lg4pl\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.058688 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-operator-scripts\") pod \"neutron-80e3-account-create-update-59d9n\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.058801 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-925g6\" (UniqueName: \"kubernetes.io/projected/6604eed0-a1d2-4ac2-9dba-66e4228899ec-kube-api-access-925g6\") pod \"neutron-db-create-s2g9g\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.058962 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x524\" (UniqueName: \"kubernetes.io/projected/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-kube-api-access-7x524\") pod \"neutron-80e3-account-create-update-59d9n\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.059132 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6604eed0-a1d2-4ac2-9dba-66e4228899ec-operator-scripts\") pod \"neutron-db-create-s2g9g\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.059755 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6604eed0-a1d2-4ac2-9dba-66e4228899ec-operator-scripts\") pod \"neutron-db-create-s2g9g\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.065486 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.074102 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-925g6\" (UniqueName: \"kubernetes.io/projected/6604eed0-a1d2-4ac2-9dba-66e4228899ec-kube-api-access-925g6\") pod \"neutron-db-create-s2g9g\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.160817 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-operator-scripts\") pod \"neutron-80e3-account-create-update-59d9n\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.161004 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x524\" (UniqueName: \"kubernetes.io/projected/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-kube-api-access-7x524\") pod \"neutron-80e3-account-create-update-59d9n\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.161618 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-operator-scripts\") pod \"neutron-80e3-account-create-update-59d9n\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.185058 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x524\" (UniqueName: \"kubernetes.io/projected/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-kube-api-access-7x524\") pod \"neutron-80e3-account-create-update-59d9n\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.220591 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:30 crc kubenswrapper[4752]: I0122 10:43:30.285439 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:32 crc kubenswrapper[4752]: I0122 10:43:32.897154 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55ddfd5dfc-4mjcj" podUID="8ba24cc8-dd8f-48b1-8fa9-cfc8306bfc5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.617153 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.622830 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.742121 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkg7f\" (UniqueName: \"kubernetes.io/projected/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-kube-api-access-wkg7f\") pod \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.742232 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-operator-scripts\") pod \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\" (UID: \"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4\") " Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.742261 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/304e6123-7015-427f-a6bd-d950c0e6c7d3-operator-scripts\") pod \"304e6123-7015-427f-a6bd-d950c0e6c7d3\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.742336 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xdjs\" (UniqueName: \"kubernetes.io/projected/304e6123-7015-427f-a6bd-d950c0e6c7d3-kube-api-access-9xdjs\") pod \"304e6123-7015-427f-a6bd-d950c0e6c7d3\" (UID: \"304e6123-7015-427f-a6bd-d950c0e6c7d3\") " Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.742947 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" (UID: "0a68d4f1-5659-4fec-bcf4-c2d1276d56d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.743004 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304e6123-7015-427f-a6bd-d950c0e6c7d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "304e6123-7015-427f-a6bd-d950c0e6c7d3" (UID: "304e6123-7015-427f-a6bd-d950c0e6c7d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.748922 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304e6123-7015-427f-a6bd-d950c0e6c7d3-kube-api-access-9xdjs" (OuterVolumeSpecName: "kube-api-access-9xdjs") pod "304e6123-7015-427f-a6bd-d950c0e6c7d3" (UID: "304e6123-7015-427f-a6bd-d950c0e6c7d3"). InnerVolumeSpecName "kube-api-access-9xdjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.760170 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-kube-api-access-wkg7f" (OuterVolumeSpecName: "kube-api-access-wkg7f") pod "0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" (UID: "0a68d4f1-5659-4fec-bcf4-c2d1276d56d4"). InnerVolumeSpecName "kube-api-access-wkg7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.786796 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.844135 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkg7f\" (UniqueName: \"kubernetes.io/projected/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-kube-api-access-wkg7f\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.844173 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.844184 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/304e6123-7015-427f-a6bd-d950c0e6c7d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:34 crc kubenswrapper[4752]: I0122 10:43:34.844194 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xdjs\" (UniqueName: \"kubernetes.io/projected/304e6123-7015-427f-a6bd-d950c0e6c7d3-kube-api-access-9xdjs\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.501441 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pc4tl" Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.501634 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pc4tl" event={"ID":"304e6123-7015-427f-a6bd-d950c0e6c7d3","Type":"ContainerDied","Data":"cf94da15ead17595efc52d2d2b0782da83c3f99c8bdda69b9aabec581517c697"} Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.501807 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf94da15ead17595efc52d2d2b0782da83c3f99c8bdda69b9aabec581517c697" Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.503593 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sq5kv" event={"ID":"0a68d4f1-5659-4fec-bcf4-c2d1276d56d4","Type":"ContainerDied","Data":"34eca2ae48c21aec531695538af6c568bb9b248449a28f28c96a32e87e618080"} Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.503627 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34eca2ae48c21aec531695538af6c568bb9b248449a28f28c96a32e87e618080" Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.503670 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-sq5kv" Jan 22 10:43:35 crc kubenswrapper[4752]: E0122 10:43:35.717368 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 22 10:43:35 crc kubenswrapper[4752]: E0122 10:43:35.717439 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 22 10:43:35 crc kubenswrapper[4752]: E0122 10:43:35.717586 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.32:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8frrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-vbsj2_openstack(a1192a1d-8861-4ce2-bfee-1360fecff6e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:43:35 crc kubenswrapper[4752]: E0122 10:43:35.718760 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-vbsj2" podUID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" Jan 22 10:43:35 crc kubenswrapper[4752]: I0122 10:43:35.770780 4752 scope.go:117] "RemoveContainer" 
containerID="1b3fbde4742ba245b889260a74cef0e89c0d02fe422d5c57a1e692a0d68e4bc0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.149158 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283363 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c85mq\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-kube-api-access-c85mq\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283407 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-thanos-prometheus-http-client-file\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283433 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-2\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283470 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-0\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283507 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-1\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283545 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-config\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283674 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283734 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-web-config\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283753 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9750781f-e5d3-4106-ac9e-431b017df583-config-out\") pod 
\"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.283797 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-tls-assets\") pod \"9750781f-e5d3-4106-ac9e-431b017df583\" (UID: \"9750781f-e5d3-4106-ac9e-431b017df583\") " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.285603 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.286019 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.286201 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.292044 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-kube-api-access-c85mq" (OuterVolumeSpecName: "kube-api-access-c85mq") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "kube-api-access-c85mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.292154 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.293023 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9750781f-e5d3-4106-ac9e-431b017df583-config-out" (OuterVolumeSpecName: "config-out") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.297591 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-config" (OuterVolumeSpecName: "config") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.302943 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.318963 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.323563 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-web-config" (OuterVolumeSpecName: "web-config") pod "9750781f-e5d3-4106-ac9e-431b017df583" (UID: "9750781f-e5d3-4106-ac9e-431b017df583"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.369508 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-88fb-account-create-update-lg4pl"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.383007 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rndfm"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386060 4752 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386093 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c85mq\" (UniqueName: \"kubernetes.io/projected/9750781f-e5d3-4106-ac9e-431b017df583-kube-api-access-c85mq\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386104 4752 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386114 4752 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386138 4752 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386149 4752 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9750781f-e5d3-4106-ac9e-431b017df583-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386158 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386187 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") on node \"crc\" " Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386197 4752 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9750781f-e5d3-4106-ac9e-431b017df583-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.386207 4752 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9750781f-e5d3-4106-ac9e-431b017df583-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.430216 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rnpv4"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.443310 4752 csi_attacher.go:630] kubernetes.io/csi: 
attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.443493 4752 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d") on node "crc" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.443547 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-80e3-account-create-update-59d9n"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.452168 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s2g9g"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.487872 4752 reconciler_common.go:293] "Volume detached for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.529535 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4drz" event={"ID":"446a849f-df12-4b01-8457-dd5c828dd567","Type":"ContainerStarted","Data":"c6856fed9b6819ff1583a4aa95c4cf5e495e35246e5f24e5cdb013b1fb81f02b"} Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.547085 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9750781f-e5d3-4106-ac9e-431b017df583","Type":"ContainerDied","Data":"32076b0542997ce8a1a181f41d634c9f5c1a0003244e2a055203c30a9bb377b7"} Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.547137 4752 scope.go:117] "RemoveContainer" containerID="6eeddeb49a6a8fbc746ba234786d11d42995c02d703c5600974ffe9294ce6443" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.547240 4752 util.go:48] "No ready sandbox for pod can be found. 
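
Note: "attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice..." means the kubevirt.io.hostpath-provisioner CSI driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips NodeUnstageVolume here (and, on the mount path later in this log, NodeStageVolume) and treats UnmountDevice as an immediate success. Below is a sketch of how a CSI driver advertises that capability through the spec's Go bindings (github.com/container-storage-interface/spec/lib/go/csi); the nodeServer type and the trivial main are hypothetical scaffolding, not part of any real driver.

    package main

    import (
        "context"
        "fmt"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
    )

    // nodeServer is a fragment of a hypothetical CSI node service; only
    // the capability advertisement is shown.
    type nodeServer struct{}

    // NodeGetCapabilities is what the kubelet consults before deciding
    // whether to call NodeStageVolume/NodeUnstageVolume. A driver that
    // omits STAGE_UNSTAGE_VOLUME from this list (as the hostpath
    // provisioner above evidently does) takes the "capability not set.
    // Skipping ..." path seen in the log.
    func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
        return &csi.NodeGetCapabilitiesResponse{
            Capabilities: []*csi.NodeServiceCapability{{
                Type: &csi.NodeServiceCapability_Rpc{
                    Rpc: &csi.NodeServiceCapability_RPC{
                        Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
                    },
                },
            }},
        }, nil
    }

    func main() {
        resp, _ := (&nodeServer{}).NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
        fmt.Println(resp.Capabilities)
    }
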
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.558832 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rnpv4" event={"ID":"b1709564-73da-4021-8b4a-865eb06625c0","Type":"ContainerStarted","Data":"bd915710cd15c9467eb4552df3041363667cc61c12d9d73f3149e0d185ea11f6"} Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.561164 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2g9g" event={"ID":"6604eed0-a1d2-4ac2-9dba-66e4228899ec","Type":"ContainerStarted","Data":"5d49f9901df3619c0138240c21de727f48fb4535fc652977976b3cc333715af0"} Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.562083 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80e3-account-create-update-59d9n" event={"ID":"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889","Type":"ContainerStarted","Data":"fecdc593cbb2a7bf8585c8975189be5d210920dd1642a5b39ac10e5d06bb752c"} Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.563812 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rndfm" event={"ID":"ed2637ca-135b-4963-849a-d95c79b04aea","Type":"ContainerStarted","Data":"5c1c2d89c8fbda7750b401acd8b635dcbf4e6633475e59218d284876c928a3a1"} Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.565997 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-88fb-account-create-update-lg4pl" event={"ID":"33a79a45-be65-4f75-be4a-75f5f1f87ce3","Type":"ContainerStarted","Data":"4fdc258053d4731dbaecd511484d781a0110a658e7c90f8a35a866a069d88247"} Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.572742 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-vbsj2" podUID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.583733 4752 scope.go:117] "RemoveContainer" containerID="499ee119498e34cd9a429c13d54dc6a2976421d77cf36d0c0dc90a60031d6727" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.592626 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-j4drz" podStartSLOduration=2.671535152 podStartE2EDuration="18.59260754s" podCreationTimestamp="2026-01-22 10:43:18 +0000 UTC" firstStartedPulling="2026-01-22 10:43:19.861935153 +0000 UTC m=+1079.091878061" lastFinishedPulling="2026-01-22 10:43:35.783007531 +0000 UTC m=+1095.012950449" observedRunningTime="2026-01-22 10:43:36.554510763 +0000 UTC m=+1095.784453671" watchObservedRunningTime="2026-01-22 10:43:36.59260754 +0000 UTC m=+1095.822550448" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.621203 4752 scope.go:117] "RemoveContainer" containerID="57537c585f30b507b452c4fda8255d7785f54efc4247a2458aba85e17537ff34" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.626342 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.646888 4752 scope.go:117] "RemoveContainer" containerID="2b5c2d1fca13358092182d00f7908299fdf8e03ff7a5c1c1f7fca3e7d84a462a" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.647767 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 
10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.657805 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.658182 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="thanos-sidecar" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658198 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="thanos-sidecar" Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.658211 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" containerName="mariadb-database-create" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658218 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" containerName="mariadb-database-create" Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.658236 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="init-config-reloader" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658242 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="init-config-reloader" Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.658263 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="prometheus" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658269 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="prometheus" Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.658275 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="config-reloader" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658282 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="config-reloader" Jan 22 10:43:36 crc kubenswrapper[4752]: E0122 10:43:36.658296 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304e6123-7015-427f-a6bd-d950c0e6c7d3" containerName="mariadb-database-create" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658302 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="304e6123-7015-427f-a6bd-d950c0e6c7d3" containerName="mariadb-database-create" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658457 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" containerName="mariadb-database-create" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658467 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="thanos-sidecar" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658477 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="prometheus" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658492 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="9750781f-e5d3-4106-ac9e-431b017df583" containerName="config-reloader" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.658500 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="304e6123-7015-427f-a6bd-d950c0e6c7d3" 
containerName="mariadb-database-create" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.659949 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.664569 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.668115 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.668271 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.668380 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.668419 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.668444 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.668630 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.673519 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.678159 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-n6xwj" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.701629 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797717 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797774 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797814 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797845 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797923 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7558d250-f7b6-49f0-90a1-b524e8b0d376-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797950 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b945r\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-kube-api-access-b945r\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.797973 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.798008 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.798038 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-config\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.798066 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.798090 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.798133 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.798158 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899178 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7558d250-f7b6-49f0-90a1-b524e8b0d376-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899231 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899253 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b945r\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-kube-api-access-b945r\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899286 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899313 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-config\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899330 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899350 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc 
kubenswrapper[4752]: I0122 10:43:36.899382 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899400 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899434 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899455 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899486 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.899512 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.900260 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.902235 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.904712 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.912147 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.913473 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.918385 4752 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.918426 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d012c5afc253dfb5bb1585a9f32cbc4589affd7948918f5a8ea0a0a38ad6626e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.929576 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.935307 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b945r\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-kube-api-access-b945r\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.967655 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7558d250-f7b6-49f0-90a1-b524e8b0d376-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.969705 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.970383 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.975608 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-config\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:36 crc kubenswrapper[4752]: I0122 10:43:36.976099 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.105636 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.133670 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9750781f-e5d3-4106-ac9e-431b017df583" path="/var/lib/kubelet/pods/9750781f-e5d3-4106-ac9e-431b017df583/volumes" Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.303324 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.579281 4752 generic.go:334] "Generic (PLEG): container finished" podID="6604eed0-a1d2-4ac2-9dba-66e4228899ec" containerID="034ed466fb3bcda9c3e1093ea0a58907cfbb749cd2e6d4ef6db45994d3f716b9" exitCode=0 Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.579375 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2g9g" event={"ID":"6604eed0-a1d2-4ac2-9dba-66e4228899ec","Type":"ContainerDied","Data":"034ed466fb3bcda9c3e1093ea0a58907cfbb749cd2e6d4ef6db45994d3f716b9"} Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.588216 4752 generic.go:334] "Generic (PLEG): container finished" podID="dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" containerID="b2526f80b0c3e824fb3dc632719617c537bb92006fb57553cec2626de389085e" exitCode=0 Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.588353 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80e3-account-create-update-59d9n" event={"ID":"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889","Type":"ContainerDied","Data":"b2526f80b0c3e824fb3dc632719617c537bb92006fb57553cec2626de389085e"} Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.590341 4752 generic.go:334] "Generic (PLEG): container finished" podID="ed2637ca-135b-4963-849a-d95c79b04aea" containerID="1768f4d270d89933ac6a508aeec4bc0a6dfb3b8bd6387ee54580d20315f08e4a" exitCode=0 Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.590404 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rndfm" event={"ID":"ed2637ca-135b-4963-849a-d95c79b04aea","Type":"ContainerDied","Data":"1768f4d270d89933ac6a508aeec4bc0a6dfb3b8bd6387ee54580d20315f08e4a"} Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.592177 4752 generic.go:334] "Generic (PLEG): container finished" podID="33a79a45-be65-4f75-be4a-75f5f1f87ce3" containerID="7398562f3e4c81de74516e17d55f3b22e723dece0718cc7797fe38d521cb865c" exitCode=0 Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.592233 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-88fb-account-create-update-lg4pl" event={"ID":"33a79a45-be65-4f75-be4a-75f5f1f87ce3","Type":"ContainerDied","Data":"7398562f3e4c81de74516e17d55f3b22e723dece0718cc7797fe38d521cb865c"} Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.613397 4752 generic.go:334] "Generic (PLEG): container finished" podID="b1709564-73da-4021-8b4a-865eb06625c0" containerID="1146b32f7ff484fd1ac7d7c5756595aa04e7293e5c331a8570f55ce4b012c520" exitCode=0 Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.614145 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rnpv4" event={"ID":"b1709564-73da-4021-8b4a-865eb06625c0","Type":"ContainerDied","Data":"1146b32f7ff484fd1ac7d7c5756595aa04e7293e5c331a8570f55ce4b012c520"} Jan 22 10:43:37 crc kubenswrapper[4752]: I0122 10:43:37.808067 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 10:43:38 crc kubenswrapper[4752]: I0122 10:43:38.623797 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerStarted","Data":"92a305d2cb677965360eb6c928de812758c4132c6b406fbf0b837b9601be4214"} Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.019235 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.144910 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2637ca-135b-4963-849a-d95c79b04aea-operator-scripts\") pod \"ed2637ca-135b-4963-849a-d95c79b04aea\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.145356 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lbv7\" (UniqueName: \"kubernetes.io/projected/ed2637ca-135b-4963-849a-d95c79b04aea-kube-api-access-9lbv7\") pod \"ed2637ca-135b-4963-849a-d95c79b04aea\" (UID: \"ed2637ca-135b-4963-849a-d95c79b04aea\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.145932 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2637ca-135b-4963-849a-d95c79b04aea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed2637ca-135b-4963-849a-d95c79b04aea" (UID: "ed2637ca-135b-4963-849a-d95c79b04aea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.152068 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2637ca-135b-4963-849a-d95c79b04aea-kube-api-access-9lbv7" (OuterVolumeSpecName: "kube-api-access-9lbv7") pod "ed2637ca-135b-4963-849a-d95c79b04aea" (UID: "ed2637ca-135b-4963-849a-d95c79b04aea"). InnerVolumeSpecName "kube-api-access-9lbv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.183833 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.190951 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.198123 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.211341 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.246280 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x524\" (UniqueName: \"kubernetes.io/projected/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-kube-api-access-7x524\") pod \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.246384 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6604eed0-a1d2-4ac2-9dba-66e4228899ec-operator-scripts\") pod \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.246422 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-925g6\" (UniqueName: \"kubernetes.io/projected/6604eed0-a1d2-4ac2-9dba-66e4228899ec-kube-api-access-925g6\") pod \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\" (UID: \"6604eed0-a1d2-4ac2-9dba-66e4228899ec\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.246476 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-operator-scripts\") pod \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\" (UID: \"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.246845 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lbv7\" (UniqueName: \"kubernetes.io/projected/ed2637ca-135b-4963-849a-d95c79b04aea-kube-api-access-9lbv7\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.246965 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed2637ca-135b-4963-849a-d95c79b04aea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.248442 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" (UID: "dd0a11ce-1fe6-4f39-b8f5-4fb45730b889"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.252292 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6604eed0-a1d2-4ac2-9dba-66e4228899ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6604eed0-a1d2-4ac2-9dba-66e4228899ec" (UID: "6604eed0-a1d2-4ac2-9dba-66e4228899ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.258223 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6604eed0-a1d2-4ac2-9dba-66e4228899ec-kube-api-access-925g6" (OuterVolumeSpecName: "kube-api-access-925g6") pod "6604eed0-a1d2-4ac2-9dba-66e4228899ec" (UID: "6604eed0-a1d2-4ac2-9dba-66e4228899ec"). InnerVolumeSpecName "kube-api-access-925g6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.258391 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-kube-api-access-7x524" (OuterVolumeSpecName: "kube-api-access-7x524") pod "dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" (UID: "dd0a11ce-1fe6-4f39-b8f5-4fb45730b889"). InnerVolumeSpecName "kube-api-access-7x524". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.347905 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1709564-73da-4021-8b4a-865eb06625c0-operator-scripts\") pod \"b1709564-73da-4021-8b4a-865eb06625c0\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.347972 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdc69\" (UniqueName: \"kubernetes.io/projected/b1709564-73da-4021-8b4a-865eb06625c0-kube-api-access-qdc69\") pod \"b1709564-73da-4021-8b4a-865eb06625c0\" (UID: \"b1709564-73da-4021-8b4a-865eb06625c0\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.348076 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a79a45-be65-4f75-be4a-75f5f1f87ce3-operator-scripts\") pod \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.348370 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1709564-73da-4021-8b4a-865eb06625c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1709564-73da-4021-8b4a-865eb06625c0" (UID: "b1709564-73da-4021-8b4a-865eb06625c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.348525 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbsmq\" (UniqueName: \"kubernetes.io/projected/33a79a45-be65-4f75-be4a-75f5f1f87ce3-kube-api-access-nbsmq\") pod \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\" (UID: \"33a79a45-be65-4f75-be4a-75f5f1f87ce3\") " Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.348553 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a79a45-be65-4f75-be4a-75f5f1f87ce3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33a79a45-be65-4f75-be4a-75f5f1f87ce3" (UID: "33a79a45-be65-4f75-be4a-75f5f1f87ce3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.349347 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1709564-73da-4021-8b4a-865eb06625c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.349374 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6604eed0-a1d2-4ac2-9dba-66e4228899ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.349387 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-925g6\" (UniqueName: \"kubernetes.io/projected/6604eed0-a1d2-4ac2-9dba-66e4228899ec-kube-api-access-925g6\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.349402 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.349415 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a79a45-be65-4f75-be4a-75f5f1f87ce3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.349429 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x524\" (UniqueName: \"kubernetes.io/projected/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889-kube-api-access-7x524\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.353287 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a79a45-be65-4f75-be4a-75f5f1f87ce3-kube-api-access-nbsmq" (OuterVolumeSpecName: "kube-api-access-nbsmq") pod "33a79a45-be65-4f75-be4a-75f5f1f87ce3" (UID: "33a79a45-be65-4f75-be4a-75f5f1f87ce3"). InnerVolumeSpecName "kube-api-access-nbsmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.353617 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1709564-73da-4021-8b4a-865eb06625c0-kube-api-access-qdc69" (OuterVolumeSpecName: "kube-api-access-qdc69") pod "b1709564-73da-4021-8b4a-865eb06625c0" (UID: "b1709564-73da-4021-8b4a-865eb06625c0"). InnerVolumeSpecName "kube-api-access-qdc69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.450672 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbsmq\" (UniqueName: \"kubernetes.io/projected/33a79a45-be65-4f75-be4a-75f5f1f87ce3-kube-api-access-nbsmq\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.450706 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdc69\" (UniqueName: \"kubernetes.io/projected/b1709564-73da-4021-8b4a-865eb06625c0-kube-api-access-qdc69\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.632334 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rnpv4" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.632736 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rnpv4" event={"ID":"b1709564-73da-4021-8b4a-865eb06625c0","Type":"ContainerDied","Data":"bd915710cd15c9467eb4552df3041363667cc61c12d9d73f3149e0d185ea11f6"} Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.632792 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd915710cd15c9467eb4552df3041363667cc61c12d9d73f3149e0d185ea11f6" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.634531 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2g9g" event={"ID":"6604eed0-a1d2-4ac2-9dba-66e4228899ec","Type":"ContainerDied","Data":"5d49f9901df3619c0138240c21de727f48fb4535fc652977976b3cc333715af0"} Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.634556 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2g9g" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.634576 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d49f9901df3619c0138240c21de727f48fb4535fc652977976b3cc333715af0" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.636221 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-80e3-account-create-update-59d9n" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.636228 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80e3-account-create-update-59d9n" event={"ID":"dd0a11ce-1fe6-4f39-b8f5-4fb45730b889","Type":"ContainerDied","Data":"fecdc593cbb2a7bf8585c8975189be5d210920dd1642a5b39ac10e5d06bb752c"} Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.636390 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fecdc593cbb2a7bf8585c8975189be5d210920dd1642a5b39ac10e5d06bb752c" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.637770 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rndfm" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.637766 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rndfm" event={"ID":"ed2637ca-135b-4963-849a-d95c79b04aea","Type":"ContainerDied","Data":"5c1c2d89c8fbda7750b401acd8b635dcbf4e6633475e59218d284876c928a3a1"} Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.637988 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1c2d89c8fbda7750b401acd8b635dcbf4e6633475e59218d284876c928a3a1" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.647907 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-88fb-account-create-update-lg4pl" event={"ID":"33a79a45-be65-4f75-be4a-75f5f1f87ce3","Type":"ContainerDied","Data":"4fdc258053d4731dbaecd511484d781a0110a658e7c90f8a35a866a069d88247"} Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.647947 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fdc258053d4731dbaecd511484d781a0110a658e7c90f8a35a866a069d88247" Jan 22 10:43:39 crc kubenswrapper[4752]: I0122 10:43:39.648025 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-88fb-account-create-update-lg4pl" Jan 22 10:43:43 crc kubenswrapper[4752]: I0122 10:43:43.751999 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerStarted","Data":"49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7"} Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.008128 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2nc8f"] Jan 22 10:43:45 crc kubenswrapper[4752]: E0122 10:43:45.008901 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2637ca-135b-4963-849a-d95c79b04aea" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.008917 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2637ca-135b-4963-849a-d95c79b04aea" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: E0122 10:43:45.008937 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6604eed0-a1d2-4ac2-9dba-66e4228899ec" containerName="mariadb-database-create" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.008947 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="6604eed0-a1d2-4ac2-9dba-66e4228899ec" containerName="mariadb-database-create" Jan 22 10:43:45 crc kubenswrapper[4752]: E0122 10:43:45.008971 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.008979 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: E0122 10:43:45.008999 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1709564-73da-4021-8b4a-865eb06625c0" containerName="mariadb-database-create" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009006 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1709564-73da-4021-8b4a-865eb06625c0" containerName="mariadb-database-create" Jan 22 10:43:45 crc kubenswrapper[4752]: E0122 10:43:45.009029 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a79a45-be65-4f75-be4a-75f5f1f87ce3" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009037 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a79a45-be65-4f75-be4a-75f5f1f87ce3" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009230 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2637ca-135b-4963-849a-d95c79b04aea" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009247 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009271 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a79a45-be65-4f75-be4a-75f5f1f87ce3" containerName="mariadb-account-create-update" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009283 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1709564-73da-4021-8b4a-865eb06625c0" containerName="mariadb-database-create" Jan 22 10:43:45 crc kubenswrapper[4752]: 
I0122 10:43:45.009291 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="6604eed0-a1d2-4ac2-9dba-66e4228899ec" containerName="mariadb-database-create" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.009982 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.013187 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.013444 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7npjg" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.026364 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2nc8f"] Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.057224 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-config-data\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.057314 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-db-sync-config-data\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.057395 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-combined-ca-bundle\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.057439 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpqg\" (UniqueName: \"kubernetes.io/projected/849caaec-8756-46a5-b544-8e914c0b022b-kube-api-access-fxpqg\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.159369 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxpqg\" (UniqueName: \"kubernetes.io/projected/849caaec-8756-46a5-b544-8e914c0b022b-kube-api-access-fxpqg\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.159523 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-config-data\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.159551 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-db-sync-config-data\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" 
Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.159613 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-combined-ca-bundle\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.166603 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-combined-ca-bundle\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.166880 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-config-data\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.170608 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-db-sync-config-data\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.191628 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpqg\" (UniqueName: \"kubernetes.io/projected/849caaec-8756-46a5-b544-8e914c0b022b-kube-api-access-fxpqg\") pod \"glance-db-sync-2nc8f\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.331763 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2nc8f" Jan 22 10:43:45 crc kubenswrapper[4752]: I0122 10:43:45.905938 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2nc8f"] Jan 22 10:43:46 crc kubenswrapper[4752]: I0122 10:43:46.776633 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2nc8f" event={"ID":"849caaec-8756-46a5-b544-8e914c0b022b","Type":"ContainerStarted","Data":"9f23937f5e945b6d1aa2a8955122268d39f20998508e6615d1bc6164eb088afb"} Jan 22 10:43:47 crc kubenswrapper[4752]: I0122 10:43:47.786712 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-vbsj2" event={"ID":"a1192a1d-8861-4ce2-bfee-1360fecff6e7","Type":"ContainerStarted","Data":"012473e05bb03dc9ef2dc19a2b157e7bf99d6ec67810ff680fcec83a0370879b"} Jan 22 10:43:47 crc kubenswrapper[4752]: I0122 10:43:47.834190 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-vbsj2" podStartSLOduration=1.843627259 podStartE2EDuration="29.834164269s" podCreationTimestamp="2026-01-22 10:43:18 +0000 UTC" firstStartedPulling="2026-01-22 10:43:19.252587252 +0000 UTC m=+1078.482530160" lastFinishedPulling="2026-01-22 10:43:47.243124242 +0000 UTC m=+1106.473067170" observedRunningTime="2026-01-22 10:43:47.804695923 +0000 UTC m=+1107.034638871" watchObservedRunningTime="2026-01-22 10:43:47.834164269 +0000 UTC m=+1107.064107177" Jan 22 10:43:49 crc kubenswrapper[4752]: I0122 10:43:49.839751 4752 generic.go:334] "Generic (PLEG): container finished" podID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerID="49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7" exitCode=0 Jan 22 10:43:49 crc kubenswrapper[4752]: I0122 10:43:49.839946 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerDied","Data":"49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7"} Jan 22 10:43:50 crc kubenswrapper[4752]: I0122 10:43:50.854464 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerStarted","Data":"dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731"} Jan 22 10:43:53 crc kubenswrapper[4752]: I0122 10:43:53.884020 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerStarted","Data":"280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1"} Jan 22 10:43:53 crc kubenswrapper[4752]: I0122 10:43:53.885824 4752 generic.go:334] "Generic (PLEG): container finished" podID="446a849f-df12-4b01-8457-dd5c828dd567" containerID="c6856fed9b6819ff1583a4aa95c4cf5e495e35246e5f24e5cdb013b1fb81f02b" exitCode=0 Jan 22 10:43:53 crc kubenswrapper[4752]: I0122 10:43:53.885878 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4drz" event={"ID":"446a849f-df12-4b01-8457-dd5c828dd567","Type":"ContainerDied","Data":"c6856fed9b6819ff1583a4aa95c4cf5e495e35246e5f24e5cdb013b1fb81f02b"} Jan 22 10:43:57 crc kubenswrapper[4752]: I0122 10:43:57.926102 4752 generic.go:334] "Generic (PLEG): container finished" podID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" containerID="012473e05bb03dc9ef2dc19a2b157e7bf99d6ec67810ff680fcec83a0370879b" exitCode=0 Jan 22 10:43:57 crc kubenswrapper[4752]: I0122 10:43:57.926209 4752 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-vbsj2" event={"ID":"a1192a1d-8861-4ce2-bfee-1360fecff6e7","Type":"ContainerDied","Data":"012473e05bb03dc9ef2dc19a2b157e7bf99d6ec67810ff680fcec83a0370879b"} Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.824469 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.928555 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-combined-ca-bundle\") pod \"446a849f-df12-4b01-8457-dd5c828dd567\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.928773 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-config-data\") pod \"446a849f-df12-4b01-8457-dd5c828dd567\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.933649 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q64hj\" (UniqueName: \"kubernetes.io/projected/446a849f-df12-4b01-8457-dd5c828dd567-kube-api-access-q64hj\") pod \"446a849f-df12-4b01-8457-dd5c828dd567\" (UID: \"446a849f-df12-4b01-8457-dd5c828dd567\") " Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.937185 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446a849f-df12-4b01-8457-dd5c828dd567-kube-api-access-q64hj" (OuterVolumeSpecName: "kube-api-access-q64hj") pod "446a849f-df12-4b01-8457-dd5c828dd567" (UID: "446a849f-df12-4b01-8457-dd5c828dd567"). InnerVolumeSpecName "kube-api-access-q64hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.942498 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j4drz" Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.942501 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4drz" event={"ID":"446a849f-df12-4b01-8457-dd5c828dd567","Type":"ContainerDied","Data":"a25383bfb8a5e58275a494fcead6ec12a8349fa12fa9d6245d67b454f02c4a29"} Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.942567 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a25383bfb8a5e58275a494fcead6ec12a8349fa12fa9d6245d67b454f02c4a29" Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.971780 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "446a849f-df12-4b01-8457-dd5c828dd567" (UID: "446a849f-df12-4b01-8457-dd5c828dd567"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:58 crc kubenswrapper[4752]: I0122 10:43:58.991052 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-config-data" (OuterVolumeSpecName: "config-data") pod "446a849f-df12-4b01-8457-dd5c828dd567" (UID: "446a849f-df12-4b01-8457-dd5c828dd567"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.036723 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.036762 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q64hj\" (UniqueName: \"kubernetes.io/projected/446a849f-df12-4b01-8457-dd5c828dd567-kube-api-access-q64hj\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.036775 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446a849f-df12-4b01-8457-dd5c828dd567-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.264507 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.341202 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-db-sync-config-data\") pod \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.341242 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-config-data\") pod \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.341274 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-combined-ca-bundle\") pod \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.341370 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8frrs\" (UniqueName: \"kubernetes.io/projected/a1192a1d-8861-4ce2-bfee-1360fecff6e7-kube-api-access-8frrs\") pod \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\" (UID: \"a1192a1d-8861-4ce2-bfee-1360fecff6e7\") " Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.344505 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a1192a1d-8861-4ce2-bfee-1360fecff6e7" (UID: "a1192a1d-8861-4ce2-bfee-1360fecff6e7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.344912 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1192a1d-8861-4ce2-bfee-1360fecff6e7-kube-api-access-8frrs" (OuterVolumeSpecName: "kube-api-access-8frrs") pod "a1192a1d-8861-4ce2-bfee-1360fecff6e7" (UID: "a1192a1d-8861-4ce2-bfee-1360fecff6e7"). InnerVolumeSpecName "kube-api-access-8frrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.370304 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1192a1d-8861-4ce2-bfee-1360fecff6e7" (UID: "a1192a1d-8861-4ce2-bfee-1360fecff6e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.391294 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-config-data" (OuterVolumeSpecName: "config-data") pod "a1192a1d-8861-4ce2-bfee-1360fecff6e7" (UID: "a1192a1d-8861-4ce2-bfee-1360fecff6e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.443698 4752 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.444094 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.444111 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1192a1d-8861-4ce2-bfee-1360fecff6e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.444128 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8frrs\" (UniqueName: \"kubernetes.io/projected/a1192a1d-8861-4ce2-bfee-1360fecff6e7-kube-api-access-8frrs\") on node \"crc\" DevicePath \"\"" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.953536 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2nc8f" event={"ID":"849caaec-8756-46a5-b544-8e914c0b022b","Type":"ContainerStarted","Data":"c22e2c024ebd331afd1ed05d3365b2ce8a025da7044071fe519e1bfb6951d935"} Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.955043 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-vbsj2" event={"ID":"a1192a1d-8861-4ce2-bfee-1360fecff6e7","Type":"ContainerDied","Data":"4abd87f4669ed9f467b6e6217f14893cceab8a4f96f6c0633827c961e97f4c01"} Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.955196 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4abd87f4669ed9f467b6e6217f14893cceab8a4f96f6c0633827c961e97f4c01" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.955365 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-vbsj2" Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.959280 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerStarted","Data":"b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf"} Jan 22 10:43:59 crc kubenswrapper[4752]: I0122 10:43:59.982482 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2nc8f" podStartSLOduration=3.09185403 podStartE2EDuration="15.982456928s" podCreationTimestamp="2026-01-22 10:43:44 +0000 UTC" firstStartedPulling="2026-01-22 10:43:45.920079432 +0000 UTC m=+1105.150022340" lastFinishedPulling="2026-01-22 10:43:58.81068233 +0000 UTC m=+1118.040625238" observedRunningTime="2026-01-22 10:43:59.979880874 +0000 UTC m=+1119.209823782" watchObservedRunningTime="2026-01-22 10:43:59.982456928 +0000 UTC m=+1119.212399836" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.039816 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.039802412 podStartE2EDuration="24.039802412s" podCreationTimestamp="2026-01-22 10:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:00.038475929 +0000 UTC m=+1119.268418837" watchObservedRunningTime="2026-01-22 10:44:00.039802412 +0000 UTC m=+1119.269745320" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.125155 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ff6fd6c5-fl48d"] Jan 22 10:44:00 crc kubenswrapper[4752]: E0122 10:44:00.125586 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446a849f-df12-4b01-8457-dd5c828dd567" containerName="keystone-db-sync" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.125634 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="446a849f-df12-4b01-8457-dd5c828dd567" containerName="keystone-db-sync" Jan 22 10:44:00 crc kubenswrapper[4752]: E0122 10:44:00.125667 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" containerName="watcher-db-sync" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.125676 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" containerName="watcher-db-sync" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.125877 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" containerName="watcher-db-sync" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.125904 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="446a849f-df12-4b01-8457-dd5c828dd567" containerName="keystone-db-sync" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.135214 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.148200 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff6fd6c5-fl48d"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.157218 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.157298 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-config\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.157385 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pl4\" (UniqueName: \"kubernetes.io/projected/38fb4345-e766-46b0-85ee-f05095a67208-kube-api-access-l4pl4\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.157413 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.157567 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-svc\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.157678 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.207655 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4nwnj"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.208649 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.214232 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.214314 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bj8bx" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.214431 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.214432 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.218884 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.242030 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4nwnj"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.260797 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4pl4\" (UniqueName: \"kubernetes.io/projected/38fb4345-e766-46b0-85ee-f05095a67208-kube-api-access-l4pl4\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.260846 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.260884 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-svc\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.260905 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.260940 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wd4\" (UniqueName: \"kubernetes.io/projected/f7c9f151-300c-48c7-b137-6a870ca61b60-kube-api-access-d7wd4\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.260981 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-combined-ca-bundle\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.261021 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-fernet-keys\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.261053 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.261080 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-config\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.261097 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-credential-keys\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.261116 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-config-data\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.261135 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-scripts\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.262178 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.262245 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-svc\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.262716 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.262810 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.263284 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-config\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.314681 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4pl4\" (UniqueName: \"kubernetes.io/projected/38fb4345-e766-46b0-85ee-f05095a67208-kube-api-access-l4pl4\") pod \"dnsmasq-dns-5ff6fd6c5-fl48d\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.315448 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.316685 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.338124 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.342199 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-86g9j" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.344948 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366235 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-combined-ca-bundle\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366293 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366312 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-fernet-keys\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366363 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-config-data\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366393 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-credential-keys\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366416 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-config-data\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366440 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-scripts\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366488 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c5e4ed-4c27-447e-b8ea-8853f84742e3-logs\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366539 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wd4\" (UniqueName: \"kubernetes.io/projected/f7c9f151-300c-48c7-b137-6a870ca61b60-kube-api-access-d7wd4\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.366559 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tw7k\" (UniqueName: \"kubernetes.io/projected/03c5e4ed-4c27-447e-b8ea-8853f84742e3-kube-api-access-6tw7k\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.376307 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-credential-keys\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.385276 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-scripts\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.392567 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-config-data\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.393331 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-combined-ca-bundle\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " 
pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.396358 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-fernet-keys\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.455444 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wd4\" (UniqueName: \"kubernetes.io/projected/f7c9f151-300c-48c7-b137-6a870ca61b60-kube-api-access-d7wd4\") pod \"keystone-bootstrap-4nwnj\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.468772 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-config-data\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.468898 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c5e4ed-4c27-447e-b8ea-8853f84742e3-logs\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.468959 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tw7k\" (UniqueName: \"kubernetes.io/projected/03c5e4ed-4c27-447e-b8ea-8853f84742e3-kube-api-access-6tw7k\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.469017 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.476345 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c5e4ed-4c27-447e-b8ea-8853f84742e3-logs\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.483784 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.490762 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-config-data\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.491796 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6vrvk"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.493358 4752 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.494979 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.534277 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.561701 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gl654" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.561923 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.562054 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.571973 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6vrvk"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.583936 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7895f968f5-rgkz6"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.585580 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.595832 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.597230 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.622132 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.622364 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ddmf2" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.622752 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.622894 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.623427 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tw7k\" (UniqueName: \"kubernetes.io/projected/03c5e4ed-4c27-447e-b8ea-8853f84742e3-kube-api-access-6tw7k\") pod \"watcher-applier-0\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.625699 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.632844 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7895f968f5-rgkz6"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.664390 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fwcgf"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.665656 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.680536 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.685123 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686664 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-scripts\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686696 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btpv\" (UniqueName: \"kubernetes.io/projected/e48968c0-ac21-49af-9161-19bf5e37c9eb-kube-api-access-9btpv\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686712 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-config-data\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686732 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsq9q\" (UniqueName: \"kubernetes.io/projected/751b5593-10c9-46a0-bb4d-141ecbc13e10-kube-api-access-vsq9q\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686763 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-config-data\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686787 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-config\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686803 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-scripts\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686821 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 
10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686838 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-db-sync-config-data\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686872 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pttm4\" (UniqueName: \"kubernetes.io/projected/29162ca8-7eeb-4727-bd25-1e129480c0a9-kube-api-access-pttm4\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686888 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29162ca8-7eeb-4727-bd25-1e129480c0a9-logs\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686907 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29162ca8-7eeb-4727-bd25-1e129480c0a9-horizon-secret-key\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686929 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686954 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e48968c0-ac21-49af-9161-19bf5e37c9eb-etc-machine-id\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686971 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv972\" (UniqueName: \"kubernetes.io/projected/35cf59e4-f205-4f9f-90ba-358c0fb38048-kube-api-access-kv972\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.686988 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-combined-ca-bundle\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.687013 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-combined-ca-bundle\") pod \"cinder-db-sync-6vrvk\" (UID: 
\"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.687039 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-config-data\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.687055 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35cf59e4-f205-4f9f-90ba-358c0fb38048-logs\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.687488 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l5ss6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.688166 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.747903 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788480 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-scripts\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788523 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9btpv\" (UniqueName: \"kubernetes.io/projected/e48968c0-ac21-49af-9161-19bf5e37c9eb-kube-api-access-9btpv\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788547 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-config-data\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788569 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsq9q\" (UniqueName: \"kubernetes.io/projected/751b5593-10c9-46a0-bb4d-141ecbc13e10-kube-api-access-vsq9q\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788600 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-config-data\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788624 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-config\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " 
pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788647 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-scripts\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788665 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788686 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-db-sync-config-data\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788707 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pttm4\" (UniqueName: \"kubernetes.io/projected/29162ca8-7eeb-4727-bd25-1e129480c0a9-kube-api-access-pttm4\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788724 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29162ca8-7eeb-4727-bd25-1e129480c0a9-logs\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788744 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29162ca8-7eeb-4727-bd25-1e129480c0a9-horizon-secret-key\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788765 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788793 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e48968c0-ac21-49af-9161-19bf5e37c9eb-etc-machine-id\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788811 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv972\" (UniqueName: \"kubernetes.io/projected/35cf59e4-f205-4f9f-90ba-358c0fb38048-kube-api-access-kv972\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788830 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-combined-ca-bundle\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788871 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-combined-ca-bundle\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788902 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-config-data\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.788921 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35cf59e4-f205-4f9f-90ba-358c0fb38048-logs\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.789312 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35cf59e4-f205-4f9f-90ba-358c0fb38048-logs\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.790161 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-config-data\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.795580 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e48968c0-ac21-49af-9161-19bf5e37c9eb-etc-machine-id\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.797664 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29162ca8-7eeb-4727-bd25-1e129480c0a9-logs\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.809568 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.809962 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-db-sync-config-data\") pod \"cinder-db-sync-6vrvk\" (UID: 
\"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.810236 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-scripts\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.815638 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-config\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.816123 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.820116 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-combined-ca-bundle\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.820182 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-config-data\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.823920 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fwcgf"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.824458 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29162ca8-7eeb-4727-bd25-1e129480c0a9-horizon-secret-key\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.825596 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-config-data\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.828437 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-combined-ca-bundle\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.849526 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pttm4\" (UniqueName: \"kubernetes.io/projected/29162ca8-7eeb-4727-bd25-1e129480c0a9-kube-api-access-pttm4\") pod \"horizon-7895f968f5-rgkz6\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " pod="openstack/horizon-7895f968f5-rgkz6" 
Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.853252 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-scripts\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.853488 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9btpv\" (UniqueName: \"kubernetes.io/projected/e48968c0-ac21-49af-9161-19bf5e37c9eb-kube-api-access-9btpv\") pod \"cinder-db-sync-6vrvk\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.874289 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.876780 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.885150 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.900148 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.905987 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.906460 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsq9q\" (UniqueName: \"kubernetes.io/projected/751b5593-10c9-46a0-bb4d-141ecbc13e10-kube-api-access-vsq9q\") pod \"neutron-db-sync-fwcgf\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.910480 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv972\" (UniqueName: \"kubernetes.io/projected/35cf59e4-f205-4f9f-90ba-358c0fb38048-kube-api-access-kv972\") pod \"watcher-decision-engine-0\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.950517 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:00 crc kubenswrapper[4752]: I0122 10:44:00.981705 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.014830 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-config-data\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.014980 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-run-httpd\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.015047 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-scripts\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.015105 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-log-httpd\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.015135 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqn6\" (UniqueName: \"kubernetes.io/projected/fd370da5-83df-42ba-a822-7cff763d174b-kube-api-access-4tqn6\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.015152 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.015293 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.113320 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.115156 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.122517 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.122761 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-config-data\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.122976 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-run-httpd\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.123118 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-scripts\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.123302 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-log-httpd\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.123417 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.123496 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqn6\" (UniqueName: \"kubernetes.io/projected/fd370da5-83df-42ba-a822-7cff763d174b-kube-api-access-4tqn6\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.125631 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-run-httpd\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.130604 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-log-httpd\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.136780 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-config-data\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " 
pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.156175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.157354 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bznl9"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.158380 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.158676 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-scripts\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.162279 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.168689 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqn6\" (UniqueName: \"kubernetes.io/projected/fd370da5-83df-42ba-a822-7cff763d174b-kube-api-access-4tqn6\") pod \"ceilometer-0\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.168753 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.169078 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tldwq" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.169546 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bznl9"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.175648 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6864464dd5-tktm8"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.178144 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.214724 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff6fd6c5-fl48d"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.267717 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6864464dd5-tktm8"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.304686 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qht9v"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.306575 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.308617 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9jsbv" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.309742 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.309951 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.315242 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.316453 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.317929 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.320133 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.329260 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qht9v"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330498 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-config-data\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330572 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-horizon-secret-key\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330609 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6jd\" (UniqueName: \"kubernetes.io/projected/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-kube-api-access-jk6jd\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330641 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-logs\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330668 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-scripts\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330774 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbvx\" 
(UniqueName: \"kubernetes.io/projected/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-kube-api-access-cdbvx\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.330909 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-combined-ca-bundle\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.331001 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-db-sync-config-data\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.337479 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.349530 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dd959b98c-qv4gw"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.351879 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.368998 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd959b98c-qv4gw"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432407 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv7bz\" (UniqueName: \"kubernetes.io/projected/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-kube-api-access-kv7bz\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432551 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-combined-ca-bundle\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432639 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-combined-ca-bundle\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432707 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-logs\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432785 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-db-sync-config-data\") pod \"barbican-db-sync-bznl9\" 
(UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432833 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432954 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c65577-767b-4bda-b56d-ac570e6cdbcf-logs\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.432977 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-config-data\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433099 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-config-data\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433170 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz7zg\" (UniqueName: \"kubernetes.io/projected/b0c65577-767b-4bda-b56d-ac570e6cdbcf-kube-api-access-sz7zg\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433230 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-horizon-secret-key\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433244 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-config-data\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433326 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk6jd\" (UniqueName: \"kubernetes.io/projected/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-kube-api-access-jk6jd\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433385 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-logs\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 
10:44:01.433410 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-scripts\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433436 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-scripts\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433462 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.433491 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbvx\" (UniqueName: \"kubernetes.io/projected/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-kube-api-access-cdbvx\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.435167 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-logs\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.436083 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-scripts\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.449832 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-config-data\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.458020 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-combined-ca-bundle\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.458420 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-db-sync-config-data\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.468837 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-horizon-secret-key\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.471063 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbvx\" (UniqueName: \"kubernetes.io/projected/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-kube-api-access-cdbvx\") pod \"horizon-6864464dd5-tktm8\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.486364 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff6fd6c5-fl48d"] Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.493308 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk6jd\" (UniqueName: \"kubernetes.io/projected/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-kube-api-access-jk6jd\") pod \"barbican-db-sync-bznl9\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") " pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534647 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534682 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c65577-767b-4bda-b56d-ac570e6cdbcf-logs\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534701 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-config-data\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534730 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-sb\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534759 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz7zg\" (UniqueName: \"kubernetes.io/projected/b0c65577-767b-4bda-b56d-ac570e6cdbcf-kube-api-access-sz7zg\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534781 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-config-data\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534825 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-nb\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534867 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-scripts\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534898 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-config\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534917 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534936 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zkh8\" (UniqueName: \"kubernetes.io/projected/b294f47a-d420-4cee-b974-315f75bb89d5-kube-api-access-9zkh8\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534958 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-swift-storage-0\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534973 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-svc\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.534995 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv7bz\" (UniqueName: \"kubernetes.io/projected/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-kube-api-access-kv7bz\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.535030 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-combined-ca-bundle\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.535047 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-logs\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.535434 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-logs\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.536322 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c65577-767b-4bda-b56d-ac570e6cdbcf-logs\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.547386 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-config-data\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.551415 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.565958 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-combined-ca-bundle\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.566171 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-config-data\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.566667 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.568137 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-scripts\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.576584 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv7bz\" (UniqueName: \"kubernetes.io/projected/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-kube-api-access-kv7bz\") pod \"placement-db-sync-qht9v\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") " pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.587420 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sz7zg\" (UniqueName: \"kubernetes.io/projected/b0c65577-767b-4bda-b56d-ac570e6cdbcf-kube-api-access-sz7zg\") pod \"watcher-api-0\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " pod="openstack/watcher-api-0" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.636586 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-sb\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.636691 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-nb\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.636739 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-config\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.636768 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zkh8\" (UniqueName: \"kubernetes.io/projected/b294f47a-d420-4cee-b974-315f75bb89d5-kube-api-access-9zkh8\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.636799 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-swift-storage-0\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.636821 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-svc\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.638601 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-nb\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.638783 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-swift-storage-0\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.639145 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-sb\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.639342 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-config\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.640961 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-svc\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.656499 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zkh8\" (UniqueName: \"kubernetes.io/projected/b294f47a-d420-4cee-b974-315f75bb89d5-kube-api-access-9zkh8\") pod \"dnsmasq-dns-dd959b98c-qv4gw\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") " pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.719791 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:01 crc kubenswrapper[4752]: I0122 10:44:01.720377 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:01.779254 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:01.821184 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:01.840424 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:01.860319 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:01.909149 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4nwnj"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.012767 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" event={"ID":"38fb4345-e766-46b0-85ee-f05095a67208","Type":"ContainerStarted","Data":"90b49827746d182fcb070f3c4c4721abd02819d71785547775c23a727a855529"} Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.018909 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03c5e4ed-4c27-447e-b8ea-8853f84742e3","Type":"ContainerStarted","Data":"2d087d4062a1128a9ebb235e63ad91a5abe0a88ce8de069de367fd2bfad99f0d"} Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.020348 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nwnj" event={"ID":"f7c9f151-300c-48c7-b137-6a870ca61b60","Type":"ContainerStarted","Data":"c05771f71e3888abfc9d670f67fe71ecc02c313658d20d3b6f709e20eca28455"} Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.305804 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.837149 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.857450 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6864464dd5-tktm8"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.893528 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57957469d5-fx7bl"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.894926 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.908458 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57957469d5-fx7bl"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.962648 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-config-data\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.962717 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facab56-48f5-4f06-b879-86a9fb933537-logs\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.962882 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-scripts\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.962972 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3facab56-48f5-4f06-b879-86a9fb933537-horizon-secret-key\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:02.963102 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7zq\" (UniqueName: \"kubernetes.io/projected/3facab56-48f5-4f06-b879-86a9fb933537-kube-api-access-th7zq\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.064825 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-config-data\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.064953 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facab56-48f5-4f06-b879-86a9fb933537-logs\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.065011 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-scripts\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.065051 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/3facab56-48f5-4f06-b879-86a9fb933537-horizon-secret-key\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.065119 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7zq\" (UniqueName: \"kubernetes.io/projected/3facab56-48f5-4f06-b879-86a9fb933537-kube-api-access-th7zq\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.065491 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facab56-48f5-4f06-b879-86a9fb933537-logs\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.065891 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-scripts\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.066268 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-config-data\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.077539 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3facab56-48f5-4f06-b879-86a9fb933537-horizon-secret-key\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.098194 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7zq\" (UniqueName: \"kubernetes.io/projected/3facab56-48f5-4f06-b879-86a9fb933537-kube-api-access-th7zq\") pod \"horizon-57957469d5-fx7bl\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:03.211665 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.041995 4752 generic.go:334] "Generic (PLEG): container finished" podID="38fb4345-e766-46b0-85ee-f05095a67208" containerID="e4a5726298ade89baaac22c39ada2f2725c25d1a4a5745fbe544fd8f2ed20f6e" exitCode=0 Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.042051 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" event={"ID":"38fb4345-e766-46b0-85ee-f05095a67208","Type":"ContainerDied","Data":"e4a5726298ade89baaac22c39ada2f2725c25d1a4a5745fbe544fd8f2ed20f6e"} Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.044320 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nwnj" event={"ID":"f7c9f151-300c-48c7-b137-6a870ca61b60","Type":"ContainerStarted","Data":"b613bfe657cd8197b9b8f3a2c193072e9763c90ae4811d451e1e726868704b5d"} Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.085723 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4nwnj" podStartSLOduration=4.085709051 podStartE2EDuration="4.085709051s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:04.082974353 +0000 UTC m=+1123.312917261" watchObservedRunningTime="2026-01-22 10:44:04.085709051 +0000 UTC m=+1123.315651959" Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.745276 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.781223 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7895f968f5-rgkz6"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.808462 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.928642 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fwcgf"] Jan 22 10:44:04 crc kubenswrapper[4752]: I0122 10:44:04.982505 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.011134 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6vrvk"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.145343 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qht9v"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.160748 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bznl9"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.180160 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6864464dd5-tktm8"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.188878 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.200668 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd959b98c-qv4gw"] Jan 22 10:44:05 crc kubenswrapper[4752]: I0122 10:44:05.258552 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57957469d5-fx7bl"] Jan 22 10:44:06 crc kubenswrapper[4752]: W0122 10:44:06.570478 4752 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb294f47a_d420_4cee_b974_315f75bb89d5.slice/crio-7aac070cb790d73fedcb3dada76fe482e922553e20527b4e24cd87f1d0971b84 WatchSource:0}: Error finding container 7aac070cb790d73fedcb3dada76fe482e922553e20527b4e24cd87f1d0971b84: Status 404 returned error can't find the container with id 7aac070cb790d73fedcb3dada76fe482e922553e20527b4e24cd87f1d0971b84 Jan 22 10:44:06 crc kubenswrapper[4752]: W0122 10:44:06.573185 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3facab56_48f5_4f06_b879_86a9fb933537.slice/crio-a999a074d6997a7d85ee6440101bb5c7f0e23d0da3d1b5ab87776c41e034ba80 WatchSource:0}: Error finding container a999a074d6997a7d85ee6440101bb5c7f0e23d0da3d1b5ab87776c41e034ba80: Status 404 returned error can't find the container with id a999a074d6997a7d85ee6440101bb5c7f0e23d0da3d1b5ab87776c41e034ba80 Jan 22 10:44:06 crc kubenswrapper[4752]: W0122 10:44:06.583786 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6b5da9f_e5d4_4629_89a9_1d215475d3bb.slice/crio-cdd4d56e17e12f2a1183e0fc172de80c3c9296c80dac10c351a1363f44b5a8da WatchSource:0}: Error finding container cdd4d56e17e12f2a1183e0fc172de80c3c9296c80dac10c351a1363f44b5a8da: Status 404 returned error can't find the container with id cdd4d56e17e12f2a1183e0fc172de80c3c9296c80dac10c351a1363f44b5a8da Jan 22 10:44:06 crc kubenswrapper[4752]: W0122 10:44:06.594455 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0c65577_767b_4bda_b56d_ac570e6cdbcf.slice/crio-c161e2323c4265c8c6e70320b81c025bfb1da7f0abd431ddf314f783fab959d6 WatchSource:0}: Error finding container c161e2323c4265c8c6e70320b81c025bfb1da7f0abd431ddf314f783fab959d6: Status 404 returned error can't find the container with id c161e2323c4265c8c6e70320b81c025bfb1da7f0abd431ddf314f783fab959d6 Jan 22 10:44:06 crc kubenswrapper[4752]: W0122 10:44:06.600735 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd370da5_83df_42ba_a822_7cff763d174b.slice/crio-89d8c316067b7a9527c1245cd1c3d770efa28ee4d1984508ebcd6ea8551f0f86 WatchSource:0}: Error finding container 89d8c316067b7a9527c1245cd1c3d770efa28ee4d1984508ebcd6ea8551f0f86: Status 404 returned error can't find the container with id 89d8c316067b7a9527c1245cd1c3d770efa28ee4d1984508ebcd6ea8551f0f86 Jan 22 10:44:06 crc kubenswrapper[4752]: W0122 10:44:06.602579 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29162ca8_7eeb_4727_bd25_1e129480c0a9.slice/crio-a015029c45916ed008f6b75f58035ae036b63c865f81422e4d6021685e8420c8 WatchSource:0}: Error finding container a015029c45916ed008f6b75f58035ae036b63c865f81422e4d6021685e8420c8: Status 404 returned error can't find the container with id a015029c45916ed008f6b75f58035ae036b63c865f81422e4d6021685e8420c8 Jan 22 10:44:06 crc kubenswrapper[4752]: I0122 10:44:06.969672 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.070042 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-sb\") pod \"38fb4345-e766-46b0-85ee-f05095a67208\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.070358 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-config\") pod \"38fb4345-e766-46b0-85ee-f05095a67208\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.070464 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-nb\") pod \"38fb4345-e766-46b0-85ee-f05095a67208\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.070488 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-swift-storage-0\") pod \"38fb4345-e766-46b0-85ee-f05095a67208\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.070553 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-svc\") pod \"38fb4345-e766-46b0-85ee-f05095a67208\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.070582 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pl4\" (UniqueName: \"kubernetes.io/projected/38fb4345-e766-46b0-85ee-f05095a67208-kube-api-access-l4pl4\") pod \"38fb4345-e766-46b0-85ee-f05095a67208\" (UID: \"38fb4345-e766-46b0-85ee-f05095a67208\") " Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.092532 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fb4345-e766-46b0-85ee-f05095a67208-kube-api-access-l4pl4" (OuterVolumeSpecName: "kube-api-access-l4pl4") pod "38fb4345-e766-46b0-85ee-f05095a67208" (UID: "38fb4345-e766-46b0-85ee-f05095a67208"). InnerVolumeSpecName "kube-api-access-l4pl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.114655 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139295 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fwcgf" event={"ID":"751b5593-10c9-46a0-bb4d-141ecbc13e10","Type":"ContainerStarted","Data":"7fa89dea8f4a7c8d4b3b348071b713d3bc8b468cfd5059e59be01b1cc580bae8"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139331 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6864464dd5-tktm8" event={"ID":"28a90e4a-ca62-4bd6-bfee-29cfda8b7478","Type":"ContainerStarted","Data":"bb5623936602559956d65127edcfc8ac655b5ab02790441b30485aa9995184f3"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139345 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6fd6c5-fl48d" event={"ID":"38fb4345-e766-46b0-85ee-f05095a67208","Type":"ContainerDied","Data":"90b49827746d182fcb070f3c4c4721abd02819d71785547775c23a727a855529"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139370 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd370da5-83df-42ba-a822-7cff763d174b","Type":"ContainerStarted","Data":"89d8c316067b7a9527c1245cd1c3d770efa28ee4d1984508ebcd6ea8551f0f86"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139381 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6vrvk" event={"ID":"e48968c0-ac21-49af-9161-19bf5e37c9eb","Type":"ContainerStarted","Data":"49bf21c0dfd7718587989cef2236dca77a0016bd9f90eef1044d8c01f5db6d17"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139390 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c65577-767b-4bda-b56d-ac570e6cdbcf","Type":"ContainerStarted","Data":"c161e2323c4265c8c6e70320b81c025bfb1da7f0abd431ddf314f783fab959d6"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139400 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" event={"ID":"b294f47a-d420-4cee-b974-315f75bb89d5","Type":"ContainerStarted","Data":"7aac070cb790d73fedcb3dada76fe482e922553e20527b4e24cd87f1d0971b84"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.139416 4752 scope.go:117] "RemoveContainer" containerID="e4a5726298ade89baaac22c39ada2f2725c25d1a4a5745fbe544fd8f2ed20f6e" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.145803 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bznl9" event={"ID":"a8ef3108-d8e0-424d-be70-8bcab25d2c0b","Type":"ContainerStarted","Data":"8a02d412d65cc01a5b3a33e6b55a65439eb5b30887be59d9ce804b3e19697c06"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.149679 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qht9v" event={"ID":"d6b5da9f-e5d4-4629-89a9-1d215475d3bb","Type":"ContainerStarted","Data":"cdd4d56e17e12f2a1183e0fc172de80c3c9296c80dac10c351a1363f44b5a8da"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.153640 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7895f968f5-rgkz6" event={"ID":"29162ca8-7eeb-4727-bd25-1e129480c0a9","Type":"ContainerStarted","Data":"a015029c45916ed008f6b75f58035ae036b63c865f81422e4d6021685e8420c8"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.165269 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57957469d5-fx7bl" 
event={"ID":"3facab56-48f5-4f06-b879-86a9fb933537","Type":"ContainerStarted","Data":"a999a074d6997a7d85ee6440101bb5c7f0e23d0da3d1b5ab87776c41e034ba80"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.173391 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4pl4\" (UniqueName: \"kubernetes.io/projected/38fb4345-e766-46b0-85ee-f05095a67208-kube-api-access-l4pl4\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.189805 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"35cf59e4-f205-4f9f-90ba-358c0fb38048","Type":"ContainerStarted","Data":"906031cadc1ec9e41e90618aca46f8bfcc4aa4bd0257893f1bffb234bfaa0cca"} Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.257411 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38fb4345-e766-46b0-85ee-f05095a67208" (UID: "38fb4345-e766-46b0-85ee-f05095a67208"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.279275 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.286046 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-config" (OuterVolumeSpecName: "config") pod "38fb4345-e766-46b0-85ee-f05095a67208" (UID: "38fb4345-e766-46b0-85ee-f05095a67208"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.304688 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.306882 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38fb4345-e766-46b0-85ee-f05095a67208" (UID: "38fb4345-e766-46b0-85ee-f05095a67208"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.309396 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38fb4345-e766-46b0-85ee-f05095a67208" (UID: "38fb4345-e766-46b0-85ee-f05095a67208"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.309726 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.318594 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38fb4345-e766-46b0-85ee-f05095a67208" (UID: "38fb4345-e766-46b0-85ee-f05095a67208"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.380982 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.381019 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.381030 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.381041 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fb4345-e766-46b0-85ee-f05095a67208-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.480033 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff6fd6c5-fl48d"] Jan 22 10:44:07 crc kubenswrapper[4752]: I0122 10:44:07.502261 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ff6fd6c5-fl48d"] Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.207193 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fwcgf" event={"ID":"751b5593-10c9-46a0-bb4d-141ecbc13e10","Type":"ContainerStarted","Data":"29bc997ed8f486a7121465c5267eda002b975eb11f95217e63829bcb7ea468d1"} Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.224323 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c65577-767b-4bda-b56d-ac570e6cdbcf","Type":"ContainerStarted","Data":"51a06c2da658a12c1580e63464f7c470e78ddc007ff53b92684d1360f1395e31"} Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.224383 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c65577-767b-4bda-b56d-ac570e6cdbcf","Type":"ContainerStarted","Data":"7732ff5fdf24ffdd50eb12e7705aaf100e410b5c70861102478c1341f0a2eec4"} Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.224550 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api-log" containerID="cri-o://7732ff5fdf24ffdd50eb12e7705aaf100e410b5c70861102478c1341f0a2eec4" gracePeriod=30 Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.225708 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" containerID="cri-o://51a06c2da658a12c1580e63464f7c470e78ddc007ff53b92684d1360f1395e31" gracePeriod=30 Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.225830 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.232957 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fwcgf" podStartSLOduration=8.232936143 podStartE2EDuration="8.232936143s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:08.222275796 +0000 UTC m=+1127.452218714" watchObservedRunningTime="2026-01-22 10:44:08.232936143 +0000 UTC m=+1127.462879051" Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.252488 4752 generic.go:334] "Generic (PLEG): container finished" podID="b294f47a-d420-4cee-b974-315f75bb89d5" containerID="4d2216276f981dc04e8089edc4a2d6dfcbdba2fb2b8ac05e5aeda5f685347008" exitCode=0 Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.252869 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" event={"ID":"b294f47a-d420-4cee-b974-315f75bb89d5","Type":"ContainerDied","Data":"4d2216276f981dc04e8089edc4a2d6dfcbdba2fb2b8ac05e5aeda5f685347008"} Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.255002 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": EOF" Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.259207 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=8.259191779 podStartE2EDuration="8.259191779s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:08.247450495 +0000 UTC m=+1127.477393403" watchObservedRunningTime="2026-01-22 10:44:08.259191779 +0000 UTC m=+1127.489134687" Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.262102 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03c5e4ed-4c27-447e-b8ea-8853f84742e3","Type":"ContainerStarted","Data":"cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0"} Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.268121 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 22 10:44:08 crc kubenswrapper[4752]: I0122 10:44:08.314530 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.566905158 podStartE2EDuration="8.314507772s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="2026-01-22 10:44:01.947768816 +0000 UTC m=+1121.177711724" lastFinishedPulling="2026-01-22 10:44:06.69537143 +0000 UTC m=+1125.925314338" observedRunningTime="2026-01-22 10:44:08.296176394 +0000 UTC m=+1127.526119302" watchObservedRunningTime="2026-01-22 10:44:08.314507772 +0000 UTC m=+1127.544450680" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.114556 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fb4345-e766-46b0-85ee-f05095a67208" path="/var/lib/kubelet/pods/38fb4345-e766-46b0-85ee-f05095a67208/volumes" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.303754 4752 generic.go:334] "Generic (PLEG): container finished" podID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerID="7732ff5fdf24ffdd50eb12e7705aaf100e410b5c70861102478c1341f0a2eec4" exitCode=143 Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.303983 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c65577-767b-4bda-b56d-ac570e6cdbcf","Type":"ContainerDied","Data":"7732ff5fdf24ffdd50eb12e7705aaf100e410b5c70861102478c1341f0a2eec4"} Jan 22 10:44:09 
crc kubenswrapper[4752]: I0122 10:44:09.748473 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7895f968f5-rgkz6"] Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.821614 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c77556c9d-7cqmw"] Jan 22 10:44:09 crc kubenswrapper[4752]: E0122 10:44:09.822059 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fb4345-e766-46b0-85ee-f05095a67208" containerName="init" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.822076 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fb4345-e766-46b0-85ee-f05095a67208" containerName="init" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.822282 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fb4345-e766-46b0-85ee-f05095a67208" containerName="init" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.823200 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.825597 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.850686 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c77556c9d-7cqmw"] Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.865241 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57957469d5-fx7bl"] Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.971491 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-secret-key\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.971640 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-combined-ca-bundle\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.971717 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba449ad-098c-4918-9403-750b0c29ee93-logs\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.971900 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-config-data\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.971987 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-tls-certs\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc 
kubenswrapper[4752]: I0122 10:44:09.972068 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-scripts\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.972132 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcp6l\" (UniqueName: \"kubernetes.io/projected/5ba449ad-098c-4918-9403-750b0c29ee93-kube-api-access-wcp6l\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.977394 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56b8d5fdb8-7gp4n"] Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.979010 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:09 crc kubenswrapper[4752]: I0122 10:44:09.988729 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b8d5fdb8-7gp4n"] Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077512 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-config-data\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077571 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af09aa4-9ce8-411f-8634-ac7eb7909555-logs\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077600 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2af09aa4-9ce8-411f-8634-ac7eb7909555-scripts\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077625 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-horizon-tls-certs\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077799 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-tls-certs\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077927 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-scripts\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " 
pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.077986 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2af09aa4-9ce8-411f-8634-ac7eb7909555-config-data\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078030 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcp6l\" (UniqueName: \"kubernetes.io/projected/5ba449ad-098c-4918-9403-750b0c29ee93-kube-api-access-wcp6l\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078148 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-combined-ca-bundle\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078221 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-secret-key\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078245 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-horizon-secret-key\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078351 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-combined-ca-bundle\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078397 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7tf\" (UniqueName: \"kubernetes.io/projected/2af09aa4-9ce8-411f-8634-ac7eb7909555-kube-api-access-jt7tf\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.078420 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba449ad-098c-4918-9403-750b0c29ee93-logs\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.079008 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba449ad-098c-4918-9403-750b0c29ee93-logs\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 
10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.079125 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-scripts\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.079371 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-config-data\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.084274 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-tls-certs\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.085408 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-secret-key\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.085945 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-combined-ca-bundle\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.096843 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcp6l\" (UniqueName: \"kubernetes.io/projected/5ba449ad-098c-4918-9403-750b0c29ee93-kube-api-access-wcp6l\") pod \"horizon-7c77556c9d-7cqmw\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") " pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.161513 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180576 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7tf\" (UniqueName: \"kubernetes.io/projected/2af09aa4-9ce8-411f-8634-ac7eb7909555-kube-api-access-jt7tf\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180641 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af09aa4-9ce8-411f-8634-ac7eb7909555-logs\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180662 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2af09aa4-9ce8-411f-8634-ac7eb7909555-scripts\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180679 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-horizon-tls-certs\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180749 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2af09aa4-9ce8-411f-8634-ac7eb7909555-config-data\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180803 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-combined-ca-bundle\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.180866 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-horizon-secret-key\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.181638 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af09aa4-9ce8-411f-8634-ac7eb7909555-logs\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.182088 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2af09aa4-9ce8-411f-8634-ac7eb7909555-scripts\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.183755 4752 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-horizon-secret-key\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.184142 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2af09aa4-9ce8-411f-8634-ac7eb7909555-config-data\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.195591 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-horizon-tls-certs\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.196423 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af09aa4-9ce8-411f-8634-ac7eb7909555-combined-ca-bundle\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.200664 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7tf\" (UniqueName: \"kubernetes.io/projected/2af09aa4-9ce8-411f-8634-ac7eb7909555-kube-api-access-jt7tf\") pod \"horizon-56b8d5fdb8-7gp4n\" (UID: \"2af09aa4-9ce8-411f-8634-ac7eb7909555\") " pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.318432 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" event={"ID":"b294f47a-d420-4cee-b974-315f75bb89d5","Type":"ContainerStarted","Data":"b2208e23dcfebbca325be51b7daa39bc7d089105ec533c28e4bdb26559be9c63"} Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.318489 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.320211 4752 generic.go:334] "Generic (PLEG): container finished" podID="f7c9f151-300c-48c7-b137-6a870ca61b60" containerID="b613bfe657cd8197b9b8f3a2c193072e9763c90ae4811d451e1e726868704b5d" exitCode=0 Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.320241 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nwnj" event={"ID":"f7c9f151-300c-48c7-b137-6a870ca61b60","Type":"ContainerDied","Data":"b613bfe657cd8197b9b8f3a2c193072e9763c90ae4811d451e1e726868704b5d"} Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.350252 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" podStartSLOduration=9.350229961 podStartE2EDuration="9.350229961s" podCreationTimestamp="2026-01-22 10:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:10.343083083 +0000 UTC m=+1129.573026001" watchObservedRunningTime="2026-01-22 10:44:10.350229961 +0000 UTC m=+1129.580172869" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.353816 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.681959 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.682001 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 22 10:44:10 crc kubenswrapper[4752]: I0122 10:44:10.718934 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 22 10:44:11 crc kubenswrapper[4752]: I0122 10:44:11.368958 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 22 10:44:11 crc kubenswrapper[4752]: I0122 10:44:11.412635 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:11 crc kubenswrapper[4752]: I0122 10:44:11.823590 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 10:44:12 crc kubenswrapper[4752]: I0122 10:44:12.010987 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:54714->10.217.0.161:9322: read: connection reset by peer" Jan 22 10:44:12 crc kubenswrapper[4752]: I0122 10:44:12.340561 4752 generic.go:334] "Generic (PLEG): container finished" podID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerID="51a06c2da658a12c1580e63464f7c470e78ddc007ff53b92684d1360f1395e31" exitCode=0 Jan 22 10:44:12 crc kubenswrapper[4752]: I0122 10:44:12.340638 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c65577-767b-4bda-b56d-ac570e6cdbcf","Type":"ContainerDied","Data":"51a06c2da658a12c1580e63464f7c470e78ddc007ff53b92684d1360f1395e31"} Jan 22 10:44:13 crc kubenswrapper[4752]: I0122 10:44:13.349952 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" containerID="cri-o://cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" gracePeriod=30 Jan 22 10:44:15 crc kubenswrapper[4752]: E0122 10:44:15.681917 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:15 crc kubenswrapper[4752]: E0122 10:44:15.683229 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:15 crc kubenswrapper[4752]: E0122 10:44:15.683571 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running 
failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:15 crc kubenswrapper[4752]: E0122 10:44:15.683645 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:16 crc kubenswrapper[4752]: I0122 10:44:16.382710 4752 generic.go:334] "Generic (PLEG): container finished" podID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" exitCode=0 Jan 22 10:44:16 crc kubenswrapper[4752]: I0122 10:44:16.382756 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03c5e4ed-4c27-447e-b8ea-8853f84742e3","Type":"ContainerDied","Data":"cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0"} Jan 22 10:44:16 crc kubenswrapper[4752]: I0122 10:44:16.822070 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": dial tcp 10.217.0.161:9322: connect: connection refused" Jan 22 10:44:16 crc kubenswrapper[4752]: I0122 10:44:16.842203 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:16 crc kubenswrapper[4752]: I0122 10:44:16.953394 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-946dbfbcf-dtqkt"] Jan 22 10:44:16 crc kubenswrapper[4752]: I0122 10:44:16.953652 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" containerID="cri-o://ed01d30ec393d9c5c34f64597657744e62d121715e967951df0786cd8e15ce0a" gracePeriod=10 Jan 22 10:44:17 crc kubenswrapper[4752]: I0122 10:44:17.399961 4752 generic.go:334] "Generic (PLEG): container finished" podID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerID="ed01d30ec393d9c5c34f64597657744e62d121715e967951df0786cd8e15ce0a" exitCode=0 Jan 22 10:44:17 crc kubenswrapper[4752]: I0122 10:44:17.400041 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" event={"ID":"194c60dd-8bd6-45e5-9e65-62efa4215dd9","Type":"ContainerDied","Data":"ed01d30ec393d9c5c34f64597657744e62d121715e967951df0786cd8e15ce0a"} Jan 22 10:44:20 crc kubenswrapper[4752]: E0122 10:44:20.682429 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:20 crc kubenswrapper[4752]: E0122 10:44:20.683136 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container 
process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:20 crc kubenswrapper[4752]: E0122 10:44:20.683458 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:20 crc kubenswrapper[4752]: E0122 10:44:20.683493 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:21 crc kubenswrapper[4752]: I0122 10:44:21.647588 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Jan 22 10:44:21 crc kubenswrapper[4752]: I0122 10:44:21.822569 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": dial tcp 10.217.0.161:9322: connect: connection refused" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.847508 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.847585 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.847788 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.32:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h64ch9bh656h5c4h76h645h544h699h64h65ch5f6h5fbh586h85hd4h5b7h649h5dch569h59ch6h5chd9h654hc4h65bh58bhbbh69hb8hb9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tqn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fd370da5-83df-42ba-a822-7cff763d174b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.872441 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.872480 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.872807 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.32:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n669h5f7h556h5fch686h68bhdh5ch54bhch9bh547h584hc4h585h578h5d6h5b9h576h58bh657h688h598hd4h5fdh5fdhf7h58bh5bh546h9ch547q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pttm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7895f968f5-rgkz6_openstack(29162ca8-7eeb-4727-bd25-1e129480c0a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:44:21 crc kubenswrapper[4752]: E0122 10:44:21.876663 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-7895f968f5-rgkz6" podUID="29162ca8-7eeb-4727-bd25-1e129480c0a9" Jan 22 10:44:23 crc kubenswrapper[4752]: E0122 10:44:23.450684 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 22 10:44:23 crc kubenswrapper[4752]: E0122 10:44:23.451089 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 22 10:44:23 crc kubenswrapper[4752]: E0122 10:44:23.451222 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.32:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv7bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-qht9v_openstack(d6b5da9f-e5d4-4629-89a9-1d215475d3bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:44:23 crc kubenswrapper[4752]: E0122 10:44:23.452884 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qht9v" podUID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.466154 4752 generic.go:334] "Generic (PLEG): container finished" podID="849caaec-8756-46a5-b544-8e914c0b022b" containerID="c22e2c024ebd331afd1ed05d3365b2ce8a025da7044071fe519e1bfb6951d935" exitCode=0 Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.466342 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2nc8f" event={"ID":"849caaec-8756-46a5-b544-8e914c0b022b","Type":"ContainerDied","Data":"c22e2c024ebd331afd1ed05d3365b2ce8a025da7044071fe519e1bfb6951d935"} Jan 22 10:44:23 crc kubenswrapper[4752]: E0122 10:44:23.491716 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-qht9v" podUID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.678986 4752 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.760940 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-config-data\") pod \"f7c9f151-300c-48c7-b137-6a870ca61b60\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.761009 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-combined-ca-bundle\") pod \"f7c9f151-300c-48c7-b137-6a870ca61b60\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.761045 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-credential-keys\") pod \"f7c9f151-300c-48c7-b137-6a870ca61b60\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.761079 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-scripts\") pod \"f7c9f151-300c-48c7-b137-6a870ca61b60\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.761143 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-fernet-keys\") pod \"f7c9f151-300c-48c7-b137-6a870ca61b60\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.761169 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wd4\" (UniqueName: \"kubernetes.io/projected/f7c9f151-300c-48c7-b137-6a870ca61b60-kube-api-access-d7wd4\") pod \"f7c9f151-300c-48c7-b137-6a870ca61b60\" (UID: \"f7c9f151-300c-48c7-b137-6a870ca61b60\") " Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.770058 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-scripts" (OuterVolumeSpecName: "scripts") pod "f7c9f151-300c-48c7-b137-6a870ca61b60" (UID: "f7c9f151-300c-48c7-b137-6a870ca61b60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.770075 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c9f151-300c-48c7-b137-6a870ca61b60-kube-api-access-d7wd4" (OuterVolumeSpecName: "kube-api-access-d7wd4") pod "f7c9f151-300c-48c7-b137-6a870ca61b60" (UID: "f7c9f151-300c-48c7-b137-6a870ca61b60"). InnerVolumeSpecName "kube-api-access-d7wd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.770945 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f7c9f151-300c-48c7-b137-6a870ca61b60" (UID: "f7c9f151-300c-48c7-b137-6a870ca61b60"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.771494 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f7c9f151-300c-48c7-b137-6a870ca61b60" (UID: "f7c9f151-300c-48c7-b137-6a870ca61b60"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.792232 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7c9f151-300c-48c7-b137-6a870ca61b60" (UID: "f7c9f151-300c-48c7-b137-6a870ca61b60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.797951 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-config-data" (OuterVolumeSpecName: "config-data") pod "f7c9f151-300c-48c7-b137-6a870ca61b60" (UID: "f7c9f151-300c-48c7-b137-6a870ca61b60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.862917 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.862957 4752 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.862969 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wd4\" (UniqueName: \"kubernetes.io/projected/f7c9f151-300c-48c7-b137-6a870ca61b60-kube-api-access-d7wd4\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.862980 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.862991 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:23 crc kubenswrapper[4752]: I0122 10:44:23.863328 4752 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7c9f151-300c-48c7-b137-6a870ca61b60-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:24 crc kubenswrapper[4752]: E0122 10:44:24.028779 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 22 10:44:24 crc kubenswrapper[4752]: E0122 10:44:24.028841 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 22 10:44:24 crc kubenswrapper[4752]: E0122 
10:44:24.028980 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.32:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-bznl9_openstack(a8ef3108-d8e0-424d-be70-8bcab25d2c0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:44:24 crc kubenswrapper[4752]: E0122 10:44:24.030310 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-bznl9" podUID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.070373 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.269330 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-scripts\") pod \"29162ca8-7eeb-4727-bd25-1e129480c0a9\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.270074 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-config-data\") pod \"29162ca8-7eeb-4727-bd25-1e129480c0a9\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.270210 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pttm4\" (UniqueName: \"kubernetes.io/projected/29162ca8-7eeb-4727-bd25-1e129480c0a9-kube-api-access-pttm4\") pod \"29162ca8-7eeb-4727-bd25-1e129480c0a9\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.270234 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29162ca8-7eeb-4727-bd25-1e129480c0a9-logs\") pod \"29162ca8-7eeb-4727-bd25-1e129480c0a9\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.270298 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29162ca8-7eeb-4727-bd25-1e129480c0a9-horizon-secret-key\") pod \"29162ca8-7eeb-4727-bd25-1e129480c0a9\" (UID: \"29162ca8-7eeb-4727-bd25-1e129480c0a9\") " Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.270730 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29162ca8-7eeb-4727-bd25-1e129480c0a9-logs" (OuterVolumeSpecName: "logs") pod "29162ca8-7eeb-4727-bd25-1e129480c0a9" (UID: "29162ca8-7eeb-4727-bd25-1e129480c0a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.270849 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-config-data" (OuterVolumeSpecName: "config-data") pod "29162ca8-7eeb-4727-bd25-1e129480c0a9" (UID: "29162ca8-7eeb-4727-bd25-1e129480c0a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.275035 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29162ca8-7eeb-4727-bd25-1e129480c0a9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "29162ca8-7eeb-4727-bd25-1e129480c0a9" (UID: "29162ca8-7eeb-4727-bd25-1e129480c0a9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.275272 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-scripts" (OuterVolumeSpecName: "scripts") pod "29162ca8-7eeb-4727-bd25-1e129480c0a9" (UID: "29162ca8-7eeb-4727-bd25-1e129480c0a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.276917 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29162ca8-7eeb-4727-bd25-1e129480c0a9-kube-api-access-pttm4" (OuterVolumeSpecName: "kube-api-access-pttm4") pod "29162ca8-7eeb-4727-bd25-1e129480c0a9" (UID: "29162ca8-7eeb-4727-bd25-1e129480c0a9"). InnerVolumeSpecName "kube-api-access-pttm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.373227 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.373314 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pttm4\" (UniqueName: \"kubernetes.io/projected/29162ca8-7eeb-4727-bd25-1e129480c0a9-kube-api-access-pttm4\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.373331 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29162ca8-7eeb-4727-bd25-1e129480c0a9-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.373343 4752 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29162ca8-7eeb-4727-bd25-1e129480c0a9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.373384 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29162ca8-7eeb-4727-bd25-1e129480c0a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.478602 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nwnj" event={"ID":"f7c9f151-300c-48c7-b137-6a870ca61b60","Type":"ContainerDied","Data":"c05771f71e3888abfc9d670f67fe71ecc02c313658d20d3b6f709e20eca28455"} Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.478648 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05771f71e3888abfc9d670f67fe71ecc02c313658d20d3b6f709e20eca28455" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.478713 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nwnj" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.486105 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7895f968f5-rgkz6" event={"ID":"29162ca8-7eeb-4727-bd25-1e129480c0a9","Type":"ContainerDied","Data":"a015029c45916ed008f6b75f58035ae036b63c865f81422e4d6021685e8420c8"} Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.486124 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7895f968f5-rgkz6" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.488738 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"35cf59e4-f205-4f9f-90ba-358c0fb38048","Type":"ContainerStarted","Data":"d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24"} Jan 22 10:44:24 crc kubenswrapper[4752]: E0122 10:44:24.495709 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-bznl9" podUID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.510303 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=21.96086138 podStartE2EDuration="24.510283443s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="2026-01-22 10:44:06.640677072 +0000 UTC m=+1125.870619980" lastFinishedPulling="2026-01-22 10:44:09.190099135 +0000 UTC m=+1128.420042043" observedRunningTime="2026-01-22 10:44:24.510214781 +0000 UTC m=+1143.740157689" watchObservedRunningTime="2026-01-22 10:44:24.510283443 +0000 UTC m=+1143.740226351" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.581085 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7895f968f5-rgkz6"] Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.593688 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7895f968f5-rgkz6"] Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.785052 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4nwnj"] Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.791868 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4nwnj"] Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.877801 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hx4lb"] Jan 22 10:44:24 crc kubenswrapper[4752]: E0122 10:44:24.878435 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c9f151-300c-48c7-b137-6a870ca61b60" containerName="keystone-bootstrap" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.878460 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c9f151-300c-48c7-b137-6a870ca61b60" containerName="keystone-bootstrap" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.878751 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c9f151-300c-48c7-b137-6a870ca61b60" containerName="keystone-bootstrap" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.879637 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.883451 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.883496 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.883531 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.883630 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.886892 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bj8bx" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.889009 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hx4lb"] Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.985517 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-scripts\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.985563 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-fernet-keys\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.985588 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tr9\" (UniqueName: \"kubernetes.io/projected/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-kube-api-access-k9tr9\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.986108 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-config-data\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.986242 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-credential-keys\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:24 crc kubenswrapper[4752]: I0122 10:44:24.986288 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-combined-ca-bundle\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.087902 4752 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-credential-keys\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.088471 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-combined-ca-bundle\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.088499 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-scripts\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.088519 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-fernet-keys\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.088543 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tr9\" (UniqueName: \"kubernetes.io/projected/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-kube-api-access-k9tr9\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.088563 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-config-data\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.096087 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-scripts\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.096470 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-fernet-keys\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.096910 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-combined-ca-bundle\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.104281 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-config-data\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") 
" pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.106425 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-credential-keys\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.114564 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29162ca8-7eeb-4727-bd25-1e129480c0a9" path="/var/lib/kubelet/pods/29162ca8-7eeb-4727-bd25-1e129480c0a9/volumes" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.115123 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tr9\" (UniqueName: \"kubernetes.io/projected/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-kube-api-access-k9tr9\") pod \"keystone-bootstrap-hx4lb\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.115487 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c9f151-300c-48c7-b137-6a870ca61b60" path="/var/lib/kubelet/pods/f7c9f151-300c-48c7-b137-6a870ca61b60/volumes" Jan 22 10:44:25 crc kubenswrapper[4752]: I0122 10:44:25.209170 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:25 crc kubenswrapper[4752]: E0122 10:44:25.682504 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:25 crc kubenswrapper[4752]: E0122 10:44:25.683090 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:25 crc kubenswrapper[4752]: E0122 10:44:25.683579 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:25 crc kubenswrapper[4752]: E0122 10:44:25.683663 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:29 crc kubenswrapper[4752]: E0122 10:44:29.096502 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35cf59e4_f205_4f9f_90ba_358c0fb38048.slice/crio-conmon-d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35cf59e4_f205_4f9f_90ba_358c0fb38048.slice/crio-d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:44:29 crc kubenswrapper[4752]: I0122 10:44:29.551332 4752 generic.go:334] "Generic (PLEG): container finished" podID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerID="d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24" exitCode=1 Jan 22 10:44:29 crc kubenswrapper[4752]: I0122 10:44:29.551379 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"35cf59e4-f205-4f9f-90ba-358c0fb38048","Type":"ContainerDied","Data":"d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24"} Jan 22 10:44:29 crc kubenswrapper[4752]: I0122 10:44:29.551974 4752 scope.go:117] "RemoveContainer" containerID="d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24" Jan 22 10:44:30 crc kubenswrapper[4752]: E0122 10:44:30.681799 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:30 crc kubenswrapper[4752]: E0122 10:44:30.682607 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:30 crc kubenswrapper[4752]: E0122 10:44:30.683277 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 10:44:30 crc kubenswrapper[4752]: E0122 10:44:30.683314 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:30 crc kubenswrapper[4752]: I0122 10:44:30.982624 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:30 crc kubenswrapper[4752]: I0122 10:44:30.982744 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:30 crc kubenswrapper[4752]: I0122 10:44:30.982768 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/watcher-decision-engine-0" Jan 22 10:44:30 crc kubenswrapper[4752]: I0122 10:44:30.982787 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:31 crc kubenswrapper[4752]: I0122 10:44:31.647672 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 22 10:44:31 crc kubenswrapper[4752]: I0122 10:44:31.822363 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.362028 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.370240 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.385546 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.385644 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2nc8f" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.502128 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-svc\") pod \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.502176 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-swift-storage-0\") pod \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.502231 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-combined-ca-bundle\") pod \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.502301 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-nb\") pod \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.502331 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpqg\" (UniqueName: \"kubernetes.io/projected/849caaec-8756-46a5-b544-8e914c0b022b-kube-api-access-fxpqg\") pod \"849caaec-8756-46a5-b544-8e914c0b022b\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503108 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6tw7k\" (UniqueName: \"kubernetes.io/projected/03c5e4ed-4c27-447e-b8ea-8853f84742e3-kube-api-access-6tw7k\") pod \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503174 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-combined-ca-bundle\") pod \"849caaec-8756-46a5-b544-8e914c0b022b\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503196 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-db-sync-config-data\") pod \"849caaec-8756-46a5-b544-8e914c0b022b\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503250 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c5e4ed-4c27-447e-b8ea-8853f84742e3-logs\") pod \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503275 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-sb\") pod \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503301 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz7zg\" (UniqueName: \"kubernetes.io/projected/b0c65577-767b-4bda-b56d-ac570e6cdbcf-kube-api-access-sz7zg\") pod \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503349 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-config-data\") pod \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503380 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-custom-prometheus-ca\") pod \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503433 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-config-data\") pod \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\" (UID: \"03c5e4ed-4c27-447e-b8ea-8853f84742e3\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503606 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-config-data\") pod \"849caaec-8756-46a5-b544-8e914c0b022b\" (UID: \"849caaec-8756-46a5-b544-8e914c0b022b\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503629 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b0c65577-767b-4bda-b56d-ac570e6cdbcf-logs\") pod \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503689 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-combined-ca-bundle\") pod \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\" (UID: \"b0c65577-767b-4bda-b56d-ac570e6cdbcf\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503723 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qsf\" (UniqueName: \"kubernetes.io/projected/194c60dd-8bd6-45e5-9e65-62efa4215dd9-kube-api-access-44qsf\") pod \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.503775 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-config\") pod \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\" (UID: \"194c60dd-8bd6-45e5-9e65-62efa4215dd9\") " Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.504736 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c5e4ed-4c27-447e-b8ea-8853f84742e3-logs" (OuterVolumeSpecName: "logs") pod "03c5e4ed-4c27-447e-b8ea-8853f84742e3" (UID: "03c5e4ed-4c27-447e-b8ea-8853f84742e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.505021 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c5e4ed-4c27-447e-b8ea-8853f84742e3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.508594 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194c60dd-8bd6-45e5-9e65-62efa4215dd9-kube-api-access-44qsf" (OuterVolumeSpecName: "kube-api-access-44qsf") pod "194c60dd-8bd6-45e5-9e65-62efa4215dd9" (UID: "194c60dd-8bd6-45e5-9e65-62efa4215dd9"). InnerVolumeSpecName "kube-api-access-44qsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.509684 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c5e4ed-4c27-447e-b8ea-8853f84742e3-kube-api-access-6tw7k" (OuterVolumeSpecName: "kube-api-access-6tw7k") pod "03c5e4ed-4c27-447e-b8ea-8853f84742e3" (UID: "03c5e4ed-4c27-447e-b8ea-8853f84742e3"). InnerVolumeSpecName "kube-api-access-6tw7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.511219 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849caaec-8756-46a5-b544-8e914c0b022b-kube-api-access-fxpqg" (OuterVolumeSpecName: "kube-api-access-fxpqg") pod "849caaec-8756-46a5-b544-8e914c0b022b" (UID: "849caaec-8756-46a5-b544-8e914c0b022b"). InnerVolumeSpecName "kube-api-access-fxpqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.514079 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "849caaec-8756-46a5-b544-8e914c0b022b" (UID: "849caaec-8756-46a5-b544-8e914c0b022b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.514590 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0c65577-767b-4bda-b56d-ac570e6cdbcf-logs" (OuterVolumeSpecName: "logs") pod "b0c65577-767b-4bda-b56d-ac570e6cdbcf" (UID: "b0c65577-767b-4bda-b56d-ac570e6cdbcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.520319 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c65577-767b-4bda-b56d-ac570e6cdbcf-kube-api-access-sz7zg" (OuterVolumeSpecName: "kube-api-access-sz7zg") pod "b0c65577-767b-4bda-b56d-ac570e6cdbcf" (UID: "b0c65577-767b-4bda-b56d-ac570e6cdbcf"). InnerVolumeSpecName "kube-api-access-sz7zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.590201 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "849caaec-8756-46a5-b544-8e914c0b022b" (UID: "849caaec-8756-46a5-b544-8e914c0b022b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.590706 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-config-data" (OuterVolumeSpecName: "config-data") pod "849caaec-8756-46a5-b544-8e914c0b022b" (UID: "849caaec-8756-46a5-b544-8e914c0b022b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.596031 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-config-data" (OuterVolumeSpecName: "config-data") pod "b0c65577-767b-4bda-b56d-ac570e6cdbcf" (UID: "b0c65577-767b-4bda-b56d-ac570e6cdbcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.600739 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03c5e4ed-4c27-447e-b8ea-8853f84742e3" (UID: "03c5e4ed-4c27-447e-b8ea-8853f84742e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606800 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606872 4752 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606889 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz7zg\" (UniqueName: \"kubernetes.io/projected/b0c65577-767b-4bda-b56d-ac570e6cdbcf-kube-api-access-sz7zg\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606907 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606948 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849caaec-8756-46a5-b544-8e914c0b022b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606961 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c65577-767b-4bda-b56d-ac570e6cdbcf-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606973 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44qsf\" (UniqueName: \"kubernetes.io/projected/194c60dd-8bd6-45e5-9e65-62efa4215dd9-kube-api-access-44qsf\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.606986 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.607026 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpqg\" (UniqueName: \"kubernetes.io/projected/849caaec-8756-46a5-b544-8e914c0b022b-kube-api-access-fxpqg\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.607039 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tw7k\" (UniqueName: \"kubernetes.io/projected/03c5e4ed-4c27-447e-b8ea-8853f84742e3-kube-api-access-6tw7k\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.607827 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b0c65577-767b-4bda-b56d-ac570e6cdbcf" (UID: "b0c65577-767b-4bda-b56d-ac570e6cdbcf"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.611967 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.611966 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" event={"ID":"194c60dd-8bd6-45e5-9e65-62efa4215dd9","Type":"ContainerDied","Data":"f2e25ff2744eddece962a4160f1125c4aaa1bc8a63e29d07a556a905bedb7194"} Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.612055 4752 scope.go:117] "RemoveContainer" containerID="ed01d30ec393d9c5c34f64597657744e62d121715e967951df0786cd8e15ce0a" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.614723 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2nc8f" event={"ID":"849caaec-8756-46a5-b544-8e914c0b022b","Type":"ContainerDied","Data":"9f23937f5e945b6d1aa2a8955122268d39f20998508e6615d1bc6164eb088afb"} Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.614746 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f23937f5e945b6d1aa2a8955122268d39f20998508e6615d1bc6164eb088afb" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.614798 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2nc8f" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.620802 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "194c60dd-8bd6-45e5-9e65-62efa4215dd9" (UID: "194c60dd-8bd6-45e5-9e65-62efa4215dd9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.621385 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c65577-767b-4bda-b56d-ac570e6cdbcf","Type":"ContainerDied","Data":"c161e2323c4265c8c6e70320b81c025bfb1da7f0abd431ddf314f783fab959d6"} Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.622712 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.630322 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0c65577-767b-4bda-b56d-ac570e6cdbcf" (UID: "b0c65577-767b-4bda-b56d-ac570e6cdbcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.631428 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"03c5e4ed-4c27-447e-b8ea-8853f84742e3","Type":"ContainerDied","Data":"2d087d4062a1128a9ebb235e63ad91a5abe0a88ce8de069de367fd2bfad99f0d"} Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.631511 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.632489 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-config-data" (OuterVolumeSpecName: "config-data") pod "03c5e4ed-4c27-447e-b8ea-8853f84742e3" (UID: "03c5e4ed-4c27-447e-b8ea-8853f84742e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.639949 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "194c60dd-8bd6-45e5-9e65-62efa4215dd9" (UID: "194c60dd-8bd6-45e5-9e65-62efa4215dd9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.641108 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56b8d5fdb8-7gp4n"] Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.641794 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "194c60dd-8bd6-45e5-9e65-62efa4215dd9" (UID: "194c60dd-8bd6-45e5-9e65-62efa4215dd9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.652363 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "194c60dd-8bd6-45e5-9e65-62efa4215dd9" (UID: "194c60dd-8bd6-45e5-9e65-62efa4215dd9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.658321 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-config" (OuterVolumeSpecName: "config") pod "194c60dd-8bd6-45e5-9e65-62efa4215dd9" (UID: "194c60dd-8bd6-45e5-9e65-62efa4215dd9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708740 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708779 4752 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708789 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c5e4ed-4c27-447e-b8ea-8853f84742e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708797 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c65577-767b-4bda-b56d-ac570e6cdbcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708818 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708826 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708834 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.708841 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/194c60dd-8bd6-45e5-9e65-62efa4215dd9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:35 crc kubenswrapper[4752]: I0122 10:44:35.974016 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-946dbfbcf-dtqkt"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.004375 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-946dbfbcf-dtqkt"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.015629 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.025204 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.034396 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.045013 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.051912 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: E0122 10:44:36.052368 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api-log" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052384 4752 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api-log" Jan 22 10:44:36 crc kubenswrapper[4752]: E0122 10:44:36.052402 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849caaec-8756-46a5-b544-8e914c0b022b" containerName="glance-db-sync" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052408 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="849caaec-8756-46a5-b544-8e914c0b022b" containerName="glance-db-sync" Jan 22 10:44:36 crc kubenswrapper[4752]: E0122 10:44:36.052422 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052428 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" Jan 22 10:44:36 crc kubenswrapper[4752]: E0122 10:44:36.052437 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052442 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:36 crc kubenswrapper[4752]: E0122 10:44:36.052452 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052458 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" Jan 22 10:44:36 crc kubenswrapper[4752]: E0122 10:44:36.052472 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="init" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052480 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="init" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052712 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api-log" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052741 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" containerName="watcher-applier" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052759 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052776 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.052797 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="849caaec-8756-46a5-b544-8e914c0b022b" containerName="glance-db-sync" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.053870 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.056262 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.060157 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.061614 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.063257 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.069254 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.081996 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.219563 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803118ee-9e61-4f5d-adeb-d9114692b054-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220039 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803118ee-9e61-4f5d-adeb-d9114692b054-logs\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220057 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803118ee-9e61-4f5d-adeb-d9114692b054-config-data\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220088 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbz8k\" (UniqueName: \"kubernetes.io/projected/01b27ce5-269a-4d59-b7bf-51a805357d4a-kube-api-access-rbz8k\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220111 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220136 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zglsh\" (UniqueName: \"kubernetes.io/projected/803118ee-9e61-4f5d-adeb-d9114692b054-kube-api-access-zglsh\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220167 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/01b27ce5-269a-4d59-b7bf-51a805357d4a-logs\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220215 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-config-data\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.220298 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.321898 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zglsh\" (UniqueName: \"kubernetes.io/projected/803118ee-9e61-4f5d-adeb-d9114692b054-kube-api-access-zglsh\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.321954 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b27ce5-269a-4d59-b7bf-51a805357d4a-logs\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.321994 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-config-data\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322050 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322106 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803118ee-9e61-4f5d-adeb-d9114692b054-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322150 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803118ee-9e61-4f5d-adeb-d9114692b054-logs\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322165 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803118ee-9e61-4f5d-adeb-d9114692b054-config-data\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322188 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rbz8k\" (UniqueName: \"kubernetes.io/projected/01b27ce5-269a-4d59-b7bf-51a805357d4a-kube-api-access-rbz8k\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322210 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322403 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b27ce5-269a-4d59-b7bf-51a805357d4a-logs\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.322979 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/803118ee-9e61-4f5d-adeb-d9114692b054-logs\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.326416 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.327403 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803118ee-9e61-4f5d-adeb-d9114692b054-config-data\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.337536 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803118ee-9e61-4f5d-adeb-d9114692b054-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.337739 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.338784 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zglsh\" (UniqueName: \"kubernetes.io/projected/803118ee-9e61-4f5d-adeb-d9114692b054-kube-api-access-zglsh\") pod \"watcher-applier-0\" (UID: \"803118ee-9e61-4f5d-adeb-d9114692b054\") " pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.339261 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-config-data\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.340199 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbz8k\" (UniqueName: \"kubernetes.io/projected/01b27ce5-269a-4d59-b7bf-51a805357d4a-kube-api-access-rbz8k\") pod \"watcher-api-0\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.374633 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.387286 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.648764 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-946dbfbcf-dtqkt" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.823108 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.981579 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dccb4fc9c-xqbgq"] Jan 22 10:44:36 crc kubenswrapper[4752]: I0122 10:44:36.983021 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.015999 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dccb4fc9c-xqbgq"] Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.138617 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c5e4ed-4c27-447e-b8ea-8853f84742e3" path="/var/lib/kubelet/pods/03c5e4ed-4c27-447e-b8ea-8853f84742e3/volumes" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.139751 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194c60dd-8bd6-45e5-9e65-62efa4215dd9" path="/var/lib/kubelet/pods/194c60dd-8bd6-45e5-9e65-62efa4215dd9/volumes" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.140642 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c65577-767b-4bda-b56d-ac570e6cdbcf" path="/var/lib/kubelet/pods/b0c65577-767b-4bda-b56d-ac570e6cdbcf/volumes" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.142595 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-svc\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.142664 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrm2\" (UniqueName: \"kubernetes.io/projected/e8778431-6322-4c5f-a77b-5c8d051c10d4-kube-api-access-htrm2\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.142703 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-nb\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.142742 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-swift-storage-0\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.142788 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-config\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.142806 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-sb\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.244399 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-config\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.244452 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-sb\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.244492 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-svc\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.244670 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htrm2\" (UniqueName: \"kubernetes.io/projected/e8778431-6322-4c5f-a77b-5c8d051c10d4-kube-api-access-htrm2\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.244875 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-nb\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.244975 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-swift-storage-0\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.245517 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-config\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.245528 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-nb\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.245522 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-svc\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.245556 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-sb\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.248080 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-swift-storage-0\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.263118 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htrm2\" (UniqueName: \"kubernetes.io/projected/e8778431-6322-4c5f-a77b-5c8d051c10d4-kube-api-access-htrm2\") pod \"dnsmasq-dns-6dccb4fc9c-xqbgq\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.335372 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.353236 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.353294 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.353426 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.32:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9btpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6vrvk_openstack(e48968c0-ac21-49af-9161-19bf5e37c9eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.354632 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\"" pod="openstack/cinder-db-sync-6vrvk" podUID="e48968c0-ac21-49af-9161-19bf5e37c9eb" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.648919 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.648967 4752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.32:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.649085 4752 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:38.102.83.32:5001/podified-master-centos10/openstack-ceilometer-notification:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h64ch9bh656h5c4h76h645h544h699h64h65ch5f6h5fbh586h85hd4h5b7h649h5dch569h59ch6h5chd9h654hc4h65bh58bhbbh69hb8hb9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tqn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fd370da5-83df-42ba-a822-7cff763d174b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 10:44:37 crc kubenswrapper[4752]: E0122 10:44:37.660999 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-6vrvk" podUID="e48968c0-ac21-49af-9161-19bf5e37c9eb" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.843460 4752 scope.go:117] "RemoveContainer" containerID="1227fa555540dda5b67b93c99e723f990358ced7bce6284e47986cb91db3f954" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.903637 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.905732 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.910789 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.910952 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.911264 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7npjg" Jan 22 10:44:37 crc kubenswrapper[4752]: I0122 10:44:37.940395 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.045328 4752 scope.go:117] "RemoveContainer" containerID="51a06c2da658a12c1580e63464f7c470e78ddc007ff53b92684d1360f1395e31" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061483 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061535 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-scripts\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061555 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061628 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061719 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-logs\") pod \"glance-default-external-api-0\" (UID: 
\"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061756 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-config-data\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.061798 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfd7\" (UniqueName: \"kubernetes.io/projected/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-kube-api-access-qrfd7\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.149047 4752 scope.go:117] "RemoveContainer" containerID="7732ff5fdf24ffdd50eb12e7705aaf100e410b5c70861102478c1341f0a2eec4" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.152208 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.156407 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.163021 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.165600 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.168313 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfd7\" (UniqueName: \"kubernetes.io/projected/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-kube-api-access-qrfd7\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.168697 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.168748 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-scripts\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.168772 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.168891 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.168970 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-logs\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.169023 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-config-data\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.170224 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-logs\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.170516 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.170587 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.182376 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-scripts\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.187413 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-config-data\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.195966 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.203387 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " 
pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.203616 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfd7\" (UniqueName: \"kubernetes.io/projected/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-kube-api-access-qrfd7\") pod \"glance-default-external-api-0\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.241292 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.261944 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c77556c9d-7cqmw"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270492 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-logs\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270607 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270655 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270680 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnpjw\" (UniqueName: \"kubernetes.io/projected/b32cf282-0e10-4246-86e4-e41965f79a2b-kube-api-access-xnpjw\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270713 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270753 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.270807 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " 
pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.328772 4752 scope.go:117] "RemoveContainer" containerID="cb986fc95320ce76b7409a4e3ae4360b86d44aab9c3fa34e41cb7ebff06433a0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.372622 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.372754 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.372849 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-logs\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.372960 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.373006 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.373029 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnpjw\" (UniqueName: \"kubernetes.io/projected/b32cf282-0e10-4246-86e4-e41965f79a2b-kube-api-access-xnpjw\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.373061 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.373428 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.381722 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.384745 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-logs\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.386096 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.388252 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.420623 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.422448 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnpjw\" (UniqueName: \"kubernetes.io/projected/b32cf282-0e10-4246-86e4-e41965f79a2b-kube-api-access-xnpjw\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.434953 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hx4lb"] Jan 22 10:44:38 crc kubenswrapper[4752]: W0122 10:44:38.452036 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46dd30e3_47fd_4bd6_934e_cd7ac88e4f5b.slice/crio-9bcca514dbafd32a7e9f5d2f02ff9d4dde813384969963fc884b000df8178b3a WatchSource:0}: Error finding container 9bcca514dbafd32a7e9f5d2f02ff9d4dde813384969963fc884b000df8178b3a: Status 404 returned error can't find the container with id 9bcca514dbafd32a7e9f5d2f02ff9d4dde813384969963fc884b000df8178b3a Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.459219 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.519222 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.530709 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.532679 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:38 crc kubenswrapper[4752]: W0122 10:44:38.572370 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod803118ee_9e61_4f5d_adeb_d9114692b054.slice/crio-9a9be3cfae96d7add9ce40879fac7cf68643a9898cf3891dfa8a4842173129fb WatchSource:0}: Error finding container 9a9be3cfae96d7add9ce40879fac7cf68643a9898cf3891dfa8a4842173129fb: Status 404 returned error can't find the container with id 9a9be3cfae96d7add9ce40879fac7cf68643a9898cf3891dfa8a4842173129fb Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.694989 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"01b27ce5-269a-4d59-b7bf-51a805357d4a","Type":"ContainerStarted","Data":"59226a995de447449b63c456140ebf3944c38fa7acfc93169db76886af506c4d"} Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.697867 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"35cf59e4-f205-4f9f-90ba-358c0fb38048","Type":"ContainerStarted","Data":"c7d8e5efb1962d3e1ba0c8089d296aa795cf9bb7df68be1a34846f0b1dd77e8e"} Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.730038 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dccb4fc9c-xqbgq"] Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.733149 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hx4lb" event={"ID":"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b","Type":"ContainerStarted","Data":"9bcca514dbafd32a7e9f5d2f02ff9d4dde813384969963fc884b000df8178b3a"} Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.745719 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"803118ee-9e61-4f5d-adeb-d9114692b054","Type":"ContainerStarted","Data":"9a9be3cfae96d7add9ce40879fac7cf68643a9898cf3891dfa8a4842173129fb"} Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.747672 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c77556c9d-7cqmw" event={"ID":"5ba449ad-098c-4918-9403-750b0c29ee93","Type":"ContainerStarted","Data":"0023b916730ca1dad989f345a5d322c8c147416c8178351ba61b02aa9adbd265"} Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.761367 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b8d5fdb8-7gp4n" event={"ID":"2af09aa4-9ce8-411f-8634-ac7eb7909555","Type":"ContainerStarted","Data":"aa19190176bd32fa92522b8dc91e6c2ba4c579cce74acec5e31d59b46d485c04"} Jan 22 10:44:38 crc kubenswrapper[4752]: W0122 10:44:38.788056 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8778431_6322_4c5f_a77b_5c8d051c10d4.slice/crio-b23920bf7171d7679fa41c43dd057e70fa7a346a4892b5739b1bc3f17cd86daa WatchSource:0}: Error finding container b23920bf7171d7679fa41c43dd057e70fa7a346a4892b5739b1bc3f17cd86daa: Status 404 returned error can't find the container with id b23920bf7171d7679fa41c43dd057e70fa7a346a4892b5739b1bc3f17cd86daa Jan 22 10:44:38 crc kubenswrapper[4752]: I0122 10:44:38.928519 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.758836 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 
10:44:39.783253 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d1bf5cd-3912-407c-9bfc-e17784ac0cde","Type":"ContainerStarted","Data":"33c08c473af19adf552a323bbf8e2462236bee2d3c34d4eed1c1156f1852a3d3"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.786757 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6864464dd5-tktm8" event={"ID":"28a90e4a-ca62-4bd6-bfee-29cfda8b7478","Type":"ContainerStarted","Data":"f680d614c0afd3b21a8db8e64421b7f550e74cf26cf9122c1c2dcfe05b10ed80"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.795671 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" event={"ID":"e8778431-6322-4c5f-a77b-5c8d051c10d4","Type":"ContainerStarted","Data":"b23920bf7171d7679fa41c43dd057e70fa7a346a4892b5739b1bc3f17cd86daa"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.798549 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bznl9" event={"ID":"a8ef3108-d8e0-424d-be70-8bcab25d2c0b","Type":"ContainerStarted","Data":"f247efb91eabfe37e9509c27f0392b9f4e1adc74db40482c9c27e41d5d314581"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.812345 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hx4lb" event={"ID":"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b","Type":"ContainerStarted","Data":"72ee64ff4bceff348d88f1ce9e14116bf2c55f9807a82f7e142d4ef8da3ac678"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.820737 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57957469d5-fx7bl" event={"ID":"3facab56-48f5-4f06-b879-86a9fb933537","Type":"ContainerStarted","Data":"d46e798bfc10137eb09aa25b2567d922c6b30720aa3745d474eff6798086f491"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.828253 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56b8d5fdb8-7gp4n" event={"ID":"2af09aa4-9ce8-411f-8634-ac7eb7909555","Type":"ContainerStarted","Data":"07ace07d3b75e583cee737403212e798997f64f1b031463161c8f0a39fe713a0"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.830687 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bznl9" podStartSLOduration=8.260885382 podStartE2EDuration="39.830663255s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="2026-01-22 10:44:06.583256066 +0000 UTC m=+1125.813198964" lastFinishedPulling="2026-01-22 10:44:38.153033929 +0000 UTC m=+1157.382976837" observedRunningTime="2026-01-22 10:44:39.827419594 +0000 UTC m=+1159.057362502" watchObservedRunningTime="2026-01-22 10:44:39.830663255 +0000 UTC m=+1159.060606173" Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.864640 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"01b27ce5-269a-4d59-b7bf-51a805357d4a","Type":"ContainerStarted","Data":"6e1db621e927b9d34801b2b2e5e9205c41ba799cf6fb7841043f036d9a3df9f7"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.880259 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hx4lb" podStartSLOduration=15.880243764 podStartE2EDuration="15.880243764s" podCreationTimestamp="2026-01-22 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:39.847393453 +0000 UTC m=+1159.077336381" 
watchObservedRunningTime="2026-01-22 10:44:39.880243764 +0000 UTC m=+1159.110186672" Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.888131 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qht9v" event={"ID":"d6b5da9f-e5d4-4629-89a9-1d215475d3bb","Type":"ContainerStarted","Data":"212946e0b17e97fef2a8b480531a5ebbe9538ec76317ca7954449d750273cf32"} Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.905517 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c77556c9d-7cqmw" event={"ID":"5ba449ad-098c-4918-9403-750b0c29ee93","Type":"ContainerStarted","Data":"ee95aea4555325bf07e8ed3255ffe4d6d9d9f485d9c00e2707350ed5c6af4b29"} Jan 22 10:44:39 crc kubenswrapper[4752]: W0122 10:44:39.909827 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb32cf282_0e10_4246_86e4_e41965f79a2b.slice/crio-b81f5f904055271972a1bd6760550671c2b154e5e2129018c8c4fc554ae076a0 WatchSource:0}: Error finding container b81f5f904055271972a1bd6760550671c2b154e5e2129018c8c4fc554ae076a0: Status 404 returned error can't find the container with id b81f5f904055271972a1bd6760550671c2b154e5e2129018c8c4fc554ae076a0 Jan 22 10:44:39 crc kubenswrapper[4752]: I0122 10:44:39.911113 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qht9v" podStartSLOduration=8.537919249 podStartE2EDuration="39.911098605s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="2026-01-22 10:44:06.585622176 +0000 UTC m=+1125.815565084" lastFinishedPulling="2026-01-22 10:44:37.958801532 +0000 UTC m=+1157.188744440" observedRunningTime="2026-01-22 10:44:39.910181462 +0000 UTC m=+1159.140124370" watchObservedRunningTime="2026-01-22 10:44:39.911098605 +0000 UTC m=+1159.141041513" Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.945579 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6864464dd5-tktm8" event={"ID":"28a90e4a-ca62-4bd6-bfee-29cfda8b7478","Type":"ContainerStarted","Data":"db9e7fc0f27aa6579f531cae42db36fd5a919ac9df8e49f6debefcd6af207b5d"} Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.946525 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6864464dd5-tktm8" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon-log" containerID="cri-o://f680d614c0afd3b21a8db8e64421b7f550e74cf26cf9122c1c2dcfe05b10ed80" gracePeriod=30 Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.946645 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6864464dd5-tktm8" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon" containerID="cri-o://db9e7fc0f27aa6579f531cae42db36fd5a919ac9df8e49f6debefcd6af207b5d" gracePeriod=30 Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.962145 4752 generic.go:334] "Generic (PLEG): container finished" podID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerID="b2e7bad9402454b1afb8873f1fbd5a7aed6e3dc42c5f64513faa1a26f2fde0a7" exitCode=0 Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.962535 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" event={"ID":"e8778431-6322-4c5f-a77b-5c8d051c10d4","Type":"ContainerDied","Data":"b2e7bad9402454b1afb8873f1fbd5a7aed6e3dc42c5f64513faa1a26f2fde0a7"} Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.977282 4752 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-56b8d5fdb8-7gp4n" event={"ID":"2af09aa4-9ce8-411f-8634-ac7eb7909555","Type":"ContainerStarted","Data":"e572d4694d550b75d9f5ddfba687c2f5acb5ef0801e3aa0bd799c910e6e5be75"} Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.982289 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:40 crc kubenswrapper[4752]: I0122 10:44:40.999751 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"803118ee-9e61-4f5d-adeb-d9114692b054","Type":"ContainerStarted","Data":"44c9e0c2405b5012cfc2b2be771202d0da8df9f65acff38249343ae3748e1308"} Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.003230 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6864464dd5-tktm8" podStartSLOduration=9.807185704 podStartE2EDuration="41.003205051s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="2026-01-22 10:44:06.652472817 +0000 UTC m=+1125.882415725" lastFinishedPulling="2026-01-22 10:44:37.848492164 +0000 UTC m=+1157.078435072" observedRunningTime="2026-01-22 10:44:40.986458212 +0000 UTC m=+1160.216401120" watchObservedRunningTime="2026-01-22 10:44:41.003205051 +0000 UTC m=+1160.233147959" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.071668 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c77556c9d-7cqmw" event={"ID":"5ba449ad-098c-4918-9403-750b0c29ee93","Type":"ContainerStarted","Data":"9aadbd246544b4c33e87afdab4b26ff2bfd1c7d8c83899d5519375d8b2cfa6d6"} Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.082707 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56b8d5fdb8-7gp4n" podStartSLOduration=32.082690618 podStartE2EDuration="32.082690618s" podCreationTimestamp="2026-01-22 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:41.052585486 +0000 UTC m=+1160.282528394" watchObservedRunningTime="2026-01-22 10:44:41.082690618 +0000 UTC m=+1160.312633526" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.087310 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.099787 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57957469d5-fx7bl" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon-log" containerID="cri-o://d46e798bfc10137eb09aa25b2567d922c6b30720aa3745d474eff6798086f491" gracePeriod=30 Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.101131 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57957469d5-fx7bl" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon" containerID="cri-o://e934eb444bf8aa33dc12592d2ae0702f33194a403b7548f34e096e4c9f84873f" gracePeriod=30 Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.176357 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.17632251 podStartE2EDuration="6.17632251s" podCreationTimestamp="2026-01-22 10:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:41.114116254 +0000 UTC 
m=+1160.344059182" watchObservedRunningTime="2026-01-22 10:44:41.17632251 +0000 UTC m=+1160.406265418" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.181151 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.181191 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.181203 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57957469d5-fx7bl" event={"ID":"3facab56-48f5-4f06-b879-86a9fb933537","Type":"ContainerStarted","Data":"e934eb444bf8aa33dc12592d2ae0702f33194a403b7548f34e096e4c9f84873f"} Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.181222 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b32cf282-0e10-4246-86e4-e41965f79a2b","Type":"ContainerStarted","Data":"b81f5f904055271972a1bd6760550671c2b154e5e2129018c8c4fc554ae076a0"} Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.181233 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"01b27ce5-269a-4d59-b7bf-51a805357d4a","Type":"ContainerStarted","Data":"a3ba8efb1a4e8e51e3a3fa08edb53da661c6de5134ab05d958bb1548b6572d84"} Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.181243 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d1bf5cd-3912-407c-9bfc-e17784ac0cde","Type":"ContainerStarted","Data":"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48"} Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.197601 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c77556c9d-7cqmw" podStartSLOduration=32.197575831 podStartE2EDuration="32.197575831s" podCreationTimestamp="2026-01-22 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:41.154694839 +0000 UTC m=+1160.384637757" watchObservedRunningTime="2026-01-22 10:44:41.197575831 +0000 UTC m=+1160.427518739" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.264621 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57957469d5-fx7bl" podStartSLOduration=10.633274244 podStartE2EDuration="39.264599497s" podCreationTimestamp="2026-01-22 10:44:02 +0000 UTC" firstStartedPulling="2026-01-22 10:44:06.57540056 +0000 UTC m=+1125.805343468" lastFinishedPulling="2026-01-22 10:44:35.206725813 +0000 UTC m=+1154.436668721" observedRunningTime="2026-01-22 10:44:41.243641113 +0000 UTC m=+1160.473584011" watchObservedRunningTime="2026-01-22 10:44:41.264599497 +0000 UTC m=+1160.494542405" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.292100 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.349767 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.349736755 podStartE2EDuration="6.349736755s" podCreationTimestamp="2026-01-22 10:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:41.345750796 +0000 UTC m=+1160.575693704" 
watchObservedRunningTime="2026-01-22 10:44:41.349736755 +0000 UTC m=+1160.579679663" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.375136 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.388718 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.418375 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.643130 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.720236 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:44:41 crc kubenswrapper[4752]: I0122 10:44:41.727268 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.187291 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d1bf5cd-3912-407c-9bfc-e17784ac0cde","Type":"ContainerStarted","Data":"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80"} Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.187443 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-log" containerID="cri-o://c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48" gracePeriod=30 Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.187949 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-httpd" containerID="cri-o://7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80" gracePeriod=30 Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.194190 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" event={"ID":"e8778431-6322-4c5f-a77b-5c8d051c10d4","Type":"ContainerStarted","Data":"08cfc20dcbe820832edef01e28d6d809ed28e5cfe0bd056ffcaaa28d30a8e6af"} Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.195012 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.215003 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b32cf282-0e10-4246-86e4-e41965f79a2b","Type":"ContainerStarted","Data":"6ccf8cec2e9629cfd1d8feb309980aeb4412453d3ff0f9078679f7fe1dbf20e0"} Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.216487 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.216472297 podStartE2EDuration="6.216472297s" podCreationTimestamp="2026-01-22 10:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:42.212280492 +0000 UTC m=+1161.442223410" watchObservedRunningTime="2026-01-22 10:44:42.216472297 +0000 UTC m=+1161.446415205" Jan 22 10:44:42 crc kubenswrapper[4752]: I0122 10:44:42.250947 
4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" podStartSLOduration=6.250931438 podStartE2EDuration="6.250931438s" podCreationTimestamp="2026-01-22 10:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:42.250014645 +0000 UTC m=+1161.479957593" watchObservedRunningTime="2026-01-22 10:44:42.250931438 +0000 UTC m=+1161.480874346" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.211864 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.213179 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253538 4752 generic.go:334] "Generic (PLEG): container finished" podID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerID="7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80" exitCode=143 Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253569 4752 generic.go:334] "Generic (PLEG): container finished" podID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerID="c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48" exitCode=143 Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253615 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d1bf5cd-3912-407c-9bfc-e17784ac0cde","Type":"ContainerDied","Data":"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80"} Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253642 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d1bf5cd-3912-407c-9bfc-e17784ac0cde","Type":"ContainerDied","Data":"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48"} Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253652 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4d1bf5cd-3912-407c-9bfc-e17784ac0cde","Type":"ContainerDied","Data":"33c08c473af19adf552a323bbf8e2462236bee2d3c34d4eed1c1156f1852a3d3"} Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253669 4752 scope.go:117] "RemoveContainer" containerID="7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.253819 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.256547 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.256941 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-log" containerID="cri-o://6ccf8cec2e9629cfd1d8feb309980aeb4412453d3ff0f9078679f7fe1dbf20e0" gracePeriod=30 Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.257256 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b32cf282-0e10-4246-86e4-e41965f79a2b","Type":"ContainerStarted","Data":"c3ecdfbacbd71538331e6537ffaba64308798a7338867ed1eec73fa2aa25e4e0"} Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.258548 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine" containerID="cri-o://c7d8e5efb1962d3e1ba0c8089d296aa795cf9bb7df68be1a34846f0b1dd77e8e" gracePeriod=30 Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.258641 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-httpd" containerID="cri-o://c3ecdfbacbd71538331e6537ffaba64308798a7338867ed1eec73fa2aa25e4e0" gracePeriod=30 Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285097 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-scripts\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285282 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-combined-ca-bundle\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285337 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285545 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-config-data\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285607 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfd7\" (UniqueName: \"kubernetes.io/projected/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-kube-api-access-qrfd7\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285699 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-logs\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.285753 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-httpd-run\") pod \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\" (UID: \"4d1bf5cd-3912-407c-9bfc-e17784ac0cde\") " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.287995 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-logs" (OuterVolumeSpecName: "logs") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.288143 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.313277 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-scripts" (OuterVolumeSpecName: "scripts") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.314047 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.315061 4752 scope.go:117] "RemoveContainer" containerID="c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.341112 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.341068505 podStartE2EDuration="6.341068505s" podCreationTimestamp="2026-01-22 10:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:43.315726891 +0000 UTC m=+1162.545669809" watchObservedRunningTime="2026-01-22 10:44:43.341068505 +0000 UTC m=+1162.571011413" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.356458 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-kube-api-access-qrfd7" (OuterVolumeSpecName: "kube-api-access-qrfd7") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "kube-api-access-qrfd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.382992 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.390064 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.390140 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.390152 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfd7\" (UniqueName: \"kubernetes.io/projected/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-kube-api-access-qrfd7\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.390168 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.390177 4752 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.390186 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.426168 4752 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.452149 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-config-data" (OuterVolumeSpecName: "config-data") pod "4d1bf5cd-3912-407c-9bfc-e17784ac0cde" (UID: "4d1bf5cd-3912-407c-9bfc-e17784ac0cde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.492050 4752 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.492085 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1bf5cd-3912-407c-9bfc-e17784ac0cde-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.550678 4752 scope.go:117] "RemoveContainer" containerID="7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80" Jan 22 10:44:43 crc kubenswrapper[4752]: E0122 10:44:43.551292 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80\": container with ID starting with 7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80 not found: ID does not exist" containerID="7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.551350 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80"} err="failed to get container status \"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80\": rpc error: code = NotFound desc = could not find container \"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80\": container with ID starting with 7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80 not found: ID does not exist" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.551382 4752 scope.go:117] "RemoveContainer" containerID="c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48" Jan 22 10:44:43 crc kubenswrapper[4752]: E0122 10:44:43.551813 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48\": container with ID starting with c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48 not found: ID does not exist" containerID="c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.551840 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48"} err="failed to get container status \"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48\": rpc error: code = NotFound desc = could not find container \"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48\": container with ID starting with c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48 not found: ID does not exist" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.551864 4752 scope.go:117] "RemoveContainer" containerID="7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.552202 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80"} err="failed to get container status \"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80\": rpc 
error: code = NotFound desc = could not find container \"7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80\": container with ID starting with 7df246aa655f20ed16efcd1dd6ccbcf12705240800cc5b785fe7887163c51b80 not found: ID does not exist" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.552225 4752 scope.go:117] "RemoveContainer" containerID="c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.552506 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48"} err="failed to get container status \"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48\": rpc error: code = NotFound desc = could not find container \"c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48\": container with ID starting with c5dd3b4270eb09322131d49e7caded43ceefdbf3a70f763cfd5f7321ce71dc48 not found: ID does not exist" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.596924 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.662055 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.680503 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:43 crc kubenswrapper[4752]: E0122 10:44:43.681248 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-httpd" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.681277 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-httpd" Jan 22 10:44:43 crc kubenswrapper[4752]: E0122 10:44:43.681295 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-log" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.681303 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-log" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.681529 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-httpd" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.681553 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" containerName="glance-log" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.686379 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.696766 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.697020 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.725216 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803087 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-config-data\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803189 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-logs\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803229 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803265 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803304 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jglk\" (UniqueName: \"kubernetes.io/projected/835169c7-e182-4674-adc7-18ef50e6a906-kube-api-access-9jglk\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803342 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-scripts\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803374 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.803523 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.904958 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jglk\" (UniqueName: \"kubernetes.io/projected/835169c7-e182-4674-adc7-18ef50e6a906-kube-api-access-9jglk\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905021 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-scripts\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905052 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905122 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905157 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-config-data\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905205 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-logs\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905232 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905257 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.905998 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.906230 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-logs\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.911652 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.911986 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.913249 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-config-data\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.915177 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-scripts\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.936935 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.939896 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jglk\" (UniqueName: \"kubernetes.io/projected/835169c7-e182-4674-adc7-18ef50e6a906-kube-api-access-9jglk\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:43 crc kubenswrapper[4752]: I0122 10:44:43.984434 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " pod="openstack/glance-default-external-api-0" Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.044597 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.270148 4752 generic.go:334] "Generic (PLEG): container finished" podID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerID="c3ecdfbacbd71538331e6537ffaba64308798a7338867ed1eec73fa2aa25e4e0" exitCode=0
Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.270615 4752 generic.go:334] "Generic (PLEG): container finished" podID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerID="6ccf8cec2e9629cfd1d8feb309980aeb4412453d3ff0f9078679f7fe1dbf20e0" exitCode=143
Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.270676 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b32cf282-0e10-4246-86e4-e41965f79a2b","Type":"ContainerDied","Data":"c3ecdfbacbd71538331e6537ffaba64308798a7338867ed1eec73fa2aa25e4e0"}
Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.270720 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b32cf282-0e10-4246-86e4-e41965f79a2b","Type":"ContainerDied","Data":"6ccf8cec2e9629cfd1d8feb309980aeb4412453d3ff0f9078679f7fe1dbf20e0"}
Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.743054 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 22 10:44:44 crc kubenswrapper[4752]: I0122 10:44:44.795288 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 10:44:44 crc kubenswrapper[4752]: W0122 10:44:44.802301 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835169c7_e182_4674_adc7_18ef50e6a906.slice/crio-8bb66450a95cf59f9cdb359d23be5c1239b6ed0d61325f2a629a695c986baf1b WatchSource:0}: Error finding container 8bb66450a95cf59f9cdb359d23be5c1239b6ed0d61325f2a629a695c986baf1b: Status 404 returned error can't find the container with id 8bb66450a95cf59f9cdb359d23be5c1239b6ed0d61325f2a629a695c986baf1b
Jan 22 10:44:45 crc kubenswrapper[4752]: I0122 10:44:45.124692 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1bf5cd-3912-407c-9bfc-e17784ac0cde" path="/var/lib/kubelet/pods/4d1bf5cd-3912-407c-9bfc-e17784ac0cde/volumes"
Jan 22 10:44:45 crc kubenswrapper[4752]: I0122 10:44:45.290467 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"835169c7-e182-4674-adc7-18ef50e6a906","Type":"ContainerStarted","Data":"8bb66450a95cf59f9cdb359d23be5c1239b6ed0d61325f2a629a695c986baf1b"}
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.044807 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.156704 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-httpd-run\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.156815 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-scripts\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.157318 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.158095 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-config-data\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.158189 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnpjw\" (UniqueName: \"kubernetes.io/projected/b32cf282-0e10-4246-86e4-e41965f79a2b-kube-api-access-xnpjw\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.158239 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.158268 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-logs\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.158299 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-combined-ca-bundle\") pod \"b32cf282-0e10-4246-86e4-e41965f79a2b\" (UID: \"b32cf282-0e10-4246-86e4-e41965f79a2b\") "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.158933 4752 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.162753 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-logs" (OuterVolumeSpecName: "logs") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.164343 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-scripts" (OuterVolumeSpecName: "scripts") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.165036 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.189088 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32cf282-0e10-4246-86e4-e41965f79a2b-kube-api-access-xnpjw" (OuterVolumeSpecName: "kube-api-access-xnpjw") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "kube-api-access-xnpjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.253002 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-d8nd5" podUID="08f8add3-b808-4eb1-a512-d955d30091ee" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.257043 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-config-data" (OuterVolumeSpecName: "config-data") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.260189 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.260217 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnpjw\" (UniqueName: \"kubernetes.io/projected/b32cf282-0e10-4246-86e4-e41965f79a2b-kube-api-access-xnpjw\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.260238 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.260247 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b32cf282-0e10-4246-86e4-e41965f79a2b-logs\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.260255 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.272959 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b32cf282-0e10-4246-86e4-e41965f79a2b" (UID: "b32cf282-0e10-4246-86e4-e41965f79a2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.284383 4752 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.305840 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b32cf282-0e10-4246-86e4-e41965f79a2b","Type":"ContainerDied","Data":"b81f5f904055271972a1bd6760550671c2b154e5e2129018c8c4fc554ae076a0"}
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.305928 4752 scope.go:117] "RemoveContainer" containerID="c3ecdfbacbd71538331e6537ffaba64308798a7338867ed1eec73fa2aa25e4e0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.305928 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.338375 4752 scope.go:117] "RemoveContainer" containerID="6ccf8cec2e9629cfd1d8feb309980aeb4412453d3ff0f9078679f7fe1dbf20e0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.352826 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.362771 4752 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.362807 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32cf282-0e10-4246-86e4-e41965f79a2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.374698 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.374962 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.392143 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.405866 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 10:44:46 crc kubenswrapper[4752]: E0122 10:44:46.406772 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-log"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.406792 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-log"
Jan 22 10:44:46 crc kubenswrapper[4752]: E0122 10:44:46.406868 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-httpd"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.406883 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-httpd"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.407251 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-log"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.407269 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" containerName="glance-httpd"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.413376 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.419202 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.420390 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.426417 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.462256 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.469365 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.565877 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.565952 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.566185 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.566275 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-logs\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.566318 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gmw\" (UniqueName: \"kubernetes.io/projected/3b42ddad-62b5-482e-b7e4-015f4e138979-kube-api-access-48gmw\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.566605 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.566886 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.567104 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.669143 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.669385 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.669870 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.669993 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670023 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-logs\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670057 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gmw\" (UniqueName: \"kubernetes.io/projected/3b42ddad-62b5-482e-b7e4-015f4e138979-kube-api-access-48gmw\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670125 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670207 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670271 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670546 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-logs\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.670682 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.673553 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.677842 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.680488 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.683476 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.691677 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gmw\" (UniqueName: \"kubernetes.io/projected/3b42ddad-62b5-482e-b7e4-015f4e138979-kube-api-access-48gmw\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.713115 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0"
\"3b42ddad-62b5-482e-b7e4-015f4e138979\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:44:46 crc kubenswrapper[4752]: I0122 10:44:46.772600 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.108557 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b32cf282-0e10-4246-86e4-e41965f79a2b" path="/var/lib/kubelet/pods/b32cf282-0e10-4246-86e4-e41965f79a2b/volumes" Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.320132 4752 generic.go:334] "Generic (PLEG): container finished" podID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerID="c7d8e5efb1962d3e1ba0c8089d296aa795cf9bb7df68be1a34846f0b1dd77e8e" exitCode=1 Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.320186 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"35cf59e4-f205-4f9f-90ba-358c0fb38048","Type":"ContainerDied","Data":"c7d8e5efb1962d3e1ba0c8089d296aa795cf9bb7df68be1a34846f0b1dd77e8e"} Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.320216 4752 scope.go:117] "RemoveContainer" containerID="d95138dcb4701bec8300195703111efe116428457cbd4fcc9d32122d90cb3b24" Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.324215 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"835169c7-e182-4674-adc7-18ef50e6a906","Type":"ContainerStarted","Data":"b6a0ee8bc26c0c8f5f3d24e24a3ac8ab9adaaf385657ad0f4a59346291749cd3"} Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.332206 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.339630 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.405445 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.428555 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd959b98c-qv4gw"] Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.428919 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" containerName="dnsmasq-dns" containerID="cri-o://b2208e23dcfebbca325be51b7daa39bc7d089105ec533c28e4bdb26559be9c63" gracePeriod=10 Jan 22 10:44:47 crc kubenswrapper[4752]: I0122 10:44:47.511708 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:44:48 crc kubenswrapper[4752]: I0122 10:44:48.338689 4752 generic.go:334] "Generic (PLEG): container finished" podID="b294f47a-d420-4cee-b974-315f75bb89d5" containerID="b2208e23dcfebbca325be51b7daa39bc7d089105ec533c28e4bdb26559be9c63" exitCode=0 Jan 22 10:44:48 crc kubenswrapper[4752]: I0122 10:44:48.339986 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" event={"ID":"b294f47a-d420-4cee-b974-315f75bb89d5","Type":"ContainerDied","Data":"b2208e23dcfebbca325be51b7daa39bc7d089105ec533c28e4bdb26559be9c63"} Jan 22 10:44:49 crc kubenswrapper[4752]: I0122 10:44:49.358595 4752 generic.go:334] "Generic (PLEG): container finished" podID="46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" 
containerID="72ee64ff4bceff348d88f1ce9e14116bf2c55f9807a82f7e142d4ef8da3ac678" exitCode=0 Jan 22 10:44:49 crc kubenswrapper[4752]: I0122 10:44:49.358700 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hx4lb" event={"ID":"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b","Type":"ContainerDied","Data":"72ee64ff4bceff348d88f1ce9e14116bf2c55f9807a82f7e142d4ef8da3ac678"} Jan 22 10:44:50 crc kubenswrapper[4752]: W0122 10:44:50.092658 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b42ddad_62b5_482e_b7e4_015f4e138979.slice/crio-38c930f5f292cb81617e1cab3271f8756ead170e88ed87ee41257189a0fe6115 WatchSource:0}: Error finding container 38c930f5f292cb81617e1cab3271f8756ead170e88ed87ee41257189a0fe6115: Status 404 returned error can't find the container with id 38c930f5f292cb81617e1cab3271f8756ead170e88ed87ee41257189a0fe6115 Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.162125 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.162208 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.164987 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7c77556c9d-7cqmw" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.220053 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.241243 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-custom-prometheus-ca\") pod \"35cf59e4-f205-4f9f-90ba-358c0fb38048\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.241435 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35cf59e4-f205-4f9f-90ba-358c0fb38048-logs\") pod \"35cf59e4-f205-4f9f-90ba-358c0fb38048\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.241517 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-config-data\") pod \"35cf59e4-f205-4f9f-90ba-358c0fb38048\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.241600 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-combined-ca-bundle\") pod \"35cf59e4-f205-4f9f-90ba-358c0fb38048\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.241770 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv972\" (UniqueName: \"kubernetes.io/projected/35cf59e4-f205-4f9f-90ba-358c0fb38048-kube-api-access-kv972\") pod \"35cf59e4-f205-4f9f-90ba-358c0fb38048\" (UID: \"35cf59e4-f205-4f9f-90ba-358c0fb38048\") " Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.245214 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35cf59e4-f205-4f9f-90ba-358c0fb38048-logs" (OuterVolumeSpecName: "logs") pod "35cf59e4-f205-4f9f-90ba-358c0fb38048" (UID: "35cf59e4-f205-4f9f-90ba-358c0fb38048"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.256101 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35cf59e4-f205-4f9f-90ba-358c0fb38048-kube-api-access-kv972" (OuterVolumeSpecName: "kube-api-access-kv972") pod "35cf59e4-f205-4f9f-90ba-358c0fb38048" (UID: "35cf59e4-f205-4f9f-90ba-358c0fb38048"). InnerVolumeSpecName "kube-api-access-kv972". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.309370 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "35cf59e4-f205-4f9f-90ba-358c0fb38048" (UID: "35cf59e4-f205-4f9f-90ba-358c0fb38048"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.324029 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35cf59e4-f205-4f9f-90ba-358c0fb38048" (UID: "35cf59e4-f205-4f9f-90ba-358c0fb38048"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.347176 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv972\" (UniqueName: \"kubernetes.io/projected/35cf59e4-f205-4f9f-90ba-358c0fb38048-kube-api-access-kv972\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.347207 4752 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.347217 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35cf59e4-f205-4f9f-90ba-358c0fb38048-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.347226 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.351427 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-config-data" (OuterVolumeSpecName: "config-data") pod "35cf59e4-f205-4f9f-90ba-358c0fb38048" (UID: "35cf59e4-f205-4f9f-90ba-358c0fb38048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.354990 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.355039 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.368217 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56b8d5fdb8-7gp4n" podUID="2af09aa4-9ce8-411f-8634-ac7eb7909555" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.396302 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42ddad-62b5-482e-b7e4-015f4e138979","Type":"ContainerStarted","Data":"38c930f5f292cb81617e1cab3271f8756ead170e88ed87ee41257189a0fe6115"} Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.400350 4752 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.403727 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"35cf59e4-f205-4f9f-90ba-358c0fb38048","Type":"ContainerDied","Data":"906031cadc1ec9e41e90618aca46f8bfcc4aa4bd0257893f1bffb234bfaa0cca"}
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.403768 4752 scope.go:117] "RemoveContainer" containerID="c7d8e5efb1962d3e1ba0c8089d296aa795cf9bb7df68be1a34846f0b1dd77e8e"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.450321 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf59e4-f205-4f9f-90ba-358c0fb38048-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.489046 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.506972 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.538879 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.552073 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-config\") pod \"b294f47a-d420-4cee-b974-315f75bb89d5\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") "
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.552163 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-sb\") pod \"b294f47a-d420-4cee-b974-315f75bb89d5\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") "
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.552243 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-swift-storage-0\") pod \"b294f47a-d420-4cee-b974-315f75bb89d5\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") "
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.552275 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-svc\") pod \"b294f47a-d420-4cee-b974-315f75bb89d5\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") "
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.552347 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-nb\") pod \"b294f47a-d420-4cee-b974-315f75bb89d5\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") "
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.552386 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zkh8\" (UniqueName: \"kubernetes.io/projected/b294f47a-d420-4cee-b974-315f75bb89d5-kube-api-access-9zkh8\") pod \"b294f47a-d420-4cee-b974-315f75bb89d5\" (UID: \"b294f47a-d420-4cee-b974-315f75bb89d5\") "
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.558516 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b294f47a-d420-4cee-b974-315f75bb89d5-kube-api-access-9zkh8" (OuterVolumeSpecName: "kube-api-access-9zkh8") pod "b294f47a-d420-4cee-b974-315f75bb89d5" (UID: "b294f47a-d420-4cee-b974-315f75bb89d5"). InnerVolumeSpecName "kube-api-access-9zkh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.564215 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:44:50 crc kubenswrapper[4752]: E0122 10:44:50.564673 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" containerName="init"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.564695 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" containerName="init"
Jan 22 10:44:50 crc kubenswrapper[4752]: E0122 10:44:50.564706 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.564716 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine"
Jan 22 10:44:50 crc kubenswrapper[4752]: E0122 10:44:50.564729 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" containerName="dnsmasq-dns"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.564736 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" containerName="dnsmasq-dns"
Jan 22 10:44:50 crc kubenswrapper[4752]: E0122 10:44:50.564757 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.564764 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.564989 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.565015 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" containerName="watcher-decision-engine"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.565032 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" containerName="dnsmasq-dns"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.578025 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.589033 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.589192 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.685536 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.686191 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.686270 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8ng\" (UniqueName: \"kubernetes.io/projected/806e176d-686f-4523-822c-f519f6a6076d-kube-api-access-8g8ng\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.686347 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/806e176d-686f-4523-822c-f519f6a6076d-logs\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.686376 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.686567 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zkh8\" (UniqueName: \"kubernetes.io/projected/b294f47a-d420-4cee-b974-315f75bb89d5-kube-api-access-9zkh8\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.723750 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b294f47a-d420-4cee-b974-315f75bb89d5" (UID: "b294f47a-d420-4cee-b974-315f75bb89d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.755164 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.755370 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api-log" containerID="cri-o://6e1db621e927b9d34801b2b2e5e9205c41ba799cf6fb7841043f036d9a3df9f7" gracePeriod=30
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.755753 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api" containerID="cri-o://a3ba8efb1a4e8e51e3a3fa08edb53da661c6de5134ab05d958bb1548b6572d84" gracePeriod=30
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.765100 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b294f47a-d420-4cee-b974-315f75bb89d5" (UID: "b294f47a-d420-4cee-b974-315f75bb89d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.769896 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b294f47a-d420-4cee-b974-315f75bb89d5" (UID: "b294f47a-d420-4cee-b974-315f75bb89d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.774942 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-config" (OuterVolumeSpecName: "config") pod "b294f47a-d420-4cee-b974-315f75bb89d5" (UID: "b294f47a-d420-4cee-b974-315f75bb89d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.794900 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795215 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8ng\" (UniqueName: \"kubernetes.io/projected/806e176d-686f-4523-822c-f519f6a6076d-kube-api-access-8g8ng\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795258 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/806e176d-686f-4523-822c-f519f6a6076d-logs\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795290 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795369 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795454 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795465 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795478 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795488 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.799526 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.800236 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/806e176d-686f-4523-822c-f519f6a6076d-logs\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.795246 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b294f47a-d420-4cee-b974-315f75bb89d5" (UID: "b294f47a-d420-4cee-b974-315f75bb89d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.801421 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.807979 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.820377 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8ng\" (UniqueName: \"kubernetes.io/projected/806e176d-686f-4523-822c-f519f6a6076d-kube-api-access-8g8ng\") pod \"watcher-decision-engine-0\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") " pod="openstack/watcher-decision-engine-0" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.898570 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b294f47a-d420-4cee-b974-315f75bb89d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:50 crc kubenswrapper[4752]: I0122 10:44:50.918110 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.085294 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hx4lb" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.187527 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35cf59e4-f205-4f9f-90ba-358c0fb38048" path="/var/lib/kubelet/pods/35cf59e4-f205-4f9f-90ba-358c0fb38048/volumes" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.206829 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tr9\" (UniqueName: \"kubernetes.io/projected/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-kube-api-access-k9tr9\") pod \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.206900 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-credential-keys\") pod \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.206957 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-config-data\") pod \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.207137 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-combined-ca-bundle\") pod \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.207196 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-scripts\") pod \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.207248 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-fernet-keys\") pod \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\" (UID: \"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b\") " Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.234847 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" (UID: "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.237070 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-kube-api-access-k9tr9" (OuterVolumeSpecName: "kube-api-access-k9tr9") pod "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" (UID: "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b"). InnerVolumeSpecName "kube-api-access-k9tr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.258811 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" (UID: "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.258899 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-scripts" (OuterVolumeSpecName: "scripts") pod "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" (UID: "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.297473 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" (UID: "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.309758 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.309793 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.309802 4752 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.309811 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tr9\" (UniqueName: \"kubernetes.io/projected/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-kube-api-access-k9tr9\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.309821 4752 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.330664 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-config-data" (OuterVolumeSpecName: "config-data") pod "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" (UID: "46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.416937 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.459049 4752 generic.go:334] "Generic (PLEG): container finished" podID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerID="6e1db621e927b9d34801b2b2e5e9205c41ba799cf6fb7841043f036d9a3df9f7" exitCode=143 Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.463089 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"01b27ce5-269a-4d59-b7bf-51a805357d4a","Type":"ContainerDied","Data":"6e1db621e927b9d34801b2b2e5e9205c41ba799cf6fb7841043f036d9a3df9f7"} Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.481549 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" event={"ID":"b294f47a-d420-4cee-b974-315f75bb89d5","Type":"ContainerDied","Data":"7aac070cb790d73fedcb3dada76fe482e922553e20527b4e24cd87f1d0971b84"} Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.481610 4752 scope.go:117] "RemoveContainer" containerID="b2208e23dcfebbca325be51b7daa39bc7d089105ec533c28e4bdb26559be9c63" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.481745 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd959b98c-qv4gw" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.486697 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5847fc978b-f7xwp"] Jan 22 10:44:51 crc kubenswrapper[4752]: E0122 10:44:51.487112 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" containerName="keystone-bootstrap" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.487134 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" containerName="keystone-bootstrap" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.487325 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" containerName="keystone-bootstrap" Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.487902 4752 util.go:30] "No sandbox for pod can be found. 
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.498185 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5847fc978b-f7xwp"]
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.502319 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.502571 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.505339 4752 generic.go:334] "Generic (PLEG): container finished" podID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" containerID="f247efb91eabfe37e9509c27f0392b9f4e1adc74db40482c9c27e41d5d314581" exitCode=0
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.505372 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bznl9" event={"ID":"a8ef3108-d8e0-424d-be70-8bcab25d2c0b","Type":"ContainerDied","Data":"f247efb91eabfe37e9509c27f0392b9f4e1adc74db40482c9c27e41d5d314581"}
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.538375 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hx4lb"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.541093 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hx4lb" event={"ID":"46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b","Type":"ContainerDied","Data":"9bcca514dbafd32a7e9f5d2f02ff9d4dde813384969963fc884b000df8178b3a"}
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.541120 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcca514dbafd32a7e9f5d2f02ff9d4dde813384969963fc884b000df8178b3a"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.557961 4752 generic.go:334] "Generic (PLEG): container finished" podID="751b5593-10c9-46a0-bb4d-141ecbc13e10" containerID="29bc997ed8f486a7121465c5267eda002b975eb11f95217e63829bcb7ea468d1" exitCode=0
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.558073 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fwcgf" event={"ID":"751b5593-10c9-46a0-bb4d-141ecbc13e10","Type":"ContainerDied","Data":"29bc997ed8f486a7121465c5267eda002b975eb11f95217e63829bcb7ea468d1"}
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.567703 4752 generic.go:334] "Generic (PLEG): container finished" podID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" containerID="212946e0b17e97fef2a8b480531a5ebbe9538ec76317ca7954449d750273cf32" exitCode=0
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.567784 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qht9v" event={"ID":"d6b5da9f-e5d4-4629-89a9-1d215475d3bb","Type":"ContainerDied","Data":"212946e0b17e97fef2a8b480531a5ebbe9538ec76317ca7954449d750273cf32"}
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.573024 4752 scope.go:117] "RemoveContainer" containerID="4d2216276f981dc04e8089edc4a2d6dfcbdba2fb2b8ac05e5aeda5f685347008"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.594237 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd370da5-83df-42ba-a822-7cff763d174b","Type":"ContainerStarted","Data":"54be2e363e89413c2d58353d464b750d052099d2757fbd887fd4719dc710a8d2"}
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.622335 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvs2\" (UniqueName: \"kubernetes.io/projected/e89f840e-6565-49f3-8d71-839114e8044b-kube-api-access-5lvs2\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.622404 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-credential-keys\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.622629 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-combined-ca-bundle\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.622795 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-public-tls-certs\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.622865 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-scripts\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.622920 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-internal-tls-certs\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.623113 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-config-data\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.623158 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-fernet-keys\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.623456 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd959b98c-qv4gw"]
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.639897 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dd959b98c-qv4gw"]
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.691146 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724277 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-config-data\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724321 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-fernet-keys\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724354 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvs2\" (UniqueName: \"kubernetes.io/projected/e89f840e-6565-49f3-8d71-839114e8044b-kube-api-access-5lvs2\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724393 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-credential-keys\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724438 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-combined-ca-bundle\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724479 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-public-tls-certs\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724498 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-scripts\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.724518 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-internal-tls-certs\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.729350 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-internal-tls-certs\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.739749 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-fernet-keys\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.740724 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-config-data\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.745348 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvs2\" (UniqueName: \"kubernetes.io/projected/e89f840e-6565-49f3-8d71-839114e8044b-kube-api-access-5lvs2\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.746589 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-credential-keys\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.746876 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-combined-ca-bundle\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.753175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-scripts\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.753381 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89f840e-6565-49f3-8d71-839114e8044b-public-tls-certs\") pod \"keystone-5847fc978b-f7xwp\" (UID: \"e89f840e-6565-49f3-8d71-839114e8044b\") " pod="openstack/keystone-5847fc978b-f7xwp"
Jan 22 10:44:51 crc kubenswrapper[4752]: I0122 10:44:51.860353 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5847fc978b-f7xwp"
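Each of the keystone pod's secret and projected volumes passes through the same three reconciler stages visible above: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A rough sketch of that desired-state vs. actual-state loop, with illustrative types that are not the kubelet's own:

    package main

    import "fmt"

    type volumeState int

    const (
        unverified volumeState = iota // not yet seen by the reconciler
        attached                      // VerifyControllerAttachedVolume passed
        mounted                       // MountVolume.SetUp succeeded
    )

    // reconcile advances every desired volume one stage per pass, mirroring
    // the verify -> mount -> set-up progression in the log (illustrative only).
    func reconcile(desired []string, actual map[string]volumeState) {
        for _, vol := range desired {
            switch actual[vol] {
            case unverified:
                fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", vol)
                actual[vol] = attached
            case attached:
                fmt.Printf("MountVolume started for volume %q\n", vol)
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
                actual[vol] = mounted
            }
        }
    }

    func main() {
        desired := []string{"config-data", "fernet-keys", "internal-tls-certs"}
        actual := map[string]volumeState{}
        reconcile(desired, actual) // pass 1: verify attachment
        reconcile(desired, actual) // pass 2: mount and set up
    }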
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.381977 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": read tcp 10.217.0.2:50934->10.217.0.168:9322: read: connection reset by peer"
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.382035 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": read tcp 10.217.0.2:50922->10.217.0.168:9322: read: connection reset by peer"
Jan 22 10:44:52 crc kubenswrapper[4752]: W0122 10:44:52.516050 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode89f840e_6565_49f3_8d71_839114e8044b.slice/crio-7eb7f225ed7d850bb22bdc05f37bdb976f5170ee3dda7277a4b8f54bf09948b9 WatchSource:0}: Error finding container 7eb7f225ed7d850bb22bdc05f37bdb976f5170ee3dda7277a4b8f54bf09948b9: Status 404 returned error can't find the container with id 7eb7f225ed7d850bb22bdc05f37bdb976f5170ee3dda7277a4b8f54bf09948b9
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.519565 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5847fc978b-f7xwp"]
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.666653 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"835169c7-e182-4674-adc7-18ef50e6a906","Type":"ContainerStarted","Data":"06d020445b695ddfa8171298ddf6d2f583fc92ca26e0f0200d92bf69bad86886"}
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.669401 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42ddad-62b5-482e-b7e4-015f4e138979","Type":"ContainerStarted","Data":"df6b42e03466da56b943ece32e867a0f32b63e3bea2146c72b25a466fca70cd0"}
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.677216 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5847fc978b-f7xwp" event={"ID":"e89f840e-6565-49f3-8d71-839114e8044b","Type":"ContainerStarted","Data":"7eb7f225ed7d850bb22bdc05f37bdb976f5170ee3dda7277a4b8f54bf09948b9"}
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.682645 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6vrvk" event={"ID":"e48968c0-ac21-49af-9161-19bf5e37c9eb","Type":"ContainerStarted","Data":"369d75194e907a9b894f8ee65e5481b809e7bfb63d02aac3dc501f1bf7ff3256"}
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.689142 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerStarted","Data":"f20b462d0c76ace0ee5befef86ad31d99bd028e8380e59b067ace82adc442b92"}
Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.689193 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerStarted","Data":"a4778b5a517ac80cf1d0856c02a21949158ed3f84ceea47452a6724fe23639b9"}
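The two "Probe failed" entries are HTTP readiness probes against the watcher-api pod IP; "connection reset by peer" is what a GET sees while the server is going down (the same pod's container was SIGTERMed above). Roughly, an HTTP probe is a GET with a short timeout that treats any transport error or non-2xx/3xx status as failure. A stdlib-only sketch of that check, not the kubelet prober itself:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP readiness check in the style of the entries
    // above; the kubelet's default timeoutSeconds for probes is 1.
    func probe(url string) error {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // transport errors surface like the log output above
        }
        defer resp.Body.Close()
        // HTTP probes count 2xx/3xx as success, everything else as failure.
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://10.217.0.168:9322/"); err != nil {
            fmt.Println("probeResult=failure output:", err)
        }
    }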
pod="openstack/glance-default-external-api-0" podStartSLOduration=9.699206564 podStartE2EDuration="9.699206564s" podCreationTimestamp="2026-01-22 10:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:52.69710518 +0000 UTC m=+1171.927048088" watchObservedRunningTime="2026-01-22 10:44:52.699206564 +0000 UTC m=+1171.929149472" Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.726338 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6vrvk" podStartSLOduration=8.841522226 podStartE2EDuration="52.726320414s" podCreationTimestamp="2026-01-22 10:44:00 +0000 UTC" firstStartedPulling="2026-01-22 10:44:06.569717368 +0000 UTC m=+1125.799660276" lastFinishedPulling="2026-01-22 10:44:50.454515556 +0000 UTC m=+1169.684458464" observedRunningTime="2026-01-22 10:44:52.724062287 +0000 UTC m=+1171.954005195" watchObservedRunningTime="2026-01-22 10:44:52.726320414 +0000 UTC m=+1171.956263322" Jan 22 10:44:52 crc kubenswrapper[4752]: I0122 10:44:52.751164 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.751146167 podStartE2EDuration="2.751146167s" podCreationTimestamp="2026-01-22 10:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:52.745665927 +0000 UTC m=+1171.975608825" watchObservedRunningTime="2026-01-22 10:44:52.751146167 +0000 UTC m=+1171.981089075" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.116587 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b294f47a-d420-4cee-b974-315f75bb89d5" path="/var/lib/kubelet/pods/b294f47a-d420-4cee-b974-315f75bb89d5/volumes" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.427070 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.465544 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-combined-ca-bundle\") pod \"751b5593-10c9-46a0-bb4d-141ecbc13e10\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.465647 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-config\") pod \"751b5593-10c9-46a0-bb4d-141ecbc13e10\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.465767 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsq9q\" (UniqueName: \"kubernetes.io/projected/751b5593-10c9-46a0-bb4d-141ecbc13e10-kube-api-access-vsq9q\") pod \"751b5593-10c9-46a0-bb4d-141ecbc13e10\" (UID: \"751b5593-10c9-46a0-bb4d-141ecbc13e10\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.495321 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.495700 4752 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.497785 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751b5593-10c9-46a0-bb4d-141ecbc13e10-kube-api-access-vsq9q" (OuterVolumeSpecName: "kube-api-access-vsq9q") pod "751b5593-10c9-46a0-bb4d-141ecbc13e10" (UID: "751b5593-10c9-46a0-bb4d-141ecbc13e10"). InnerVolumeSpecName "kube-api-access-vsq9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.513603 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "751b5593-10c9-46a0-bb4d-141ecbc13e10" (UID: "751b5593-10c9-46a0-bb4d-141ecbc13e10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570145 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk6jd\" (UniqueName: \"kubernetes.io/projected/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-kube-api-access-jk6jd\") pod \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570507 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-config-data\") pod \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570546 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-db-sync-config-data\") pod \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570613 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-combined-ca-bundle\") pod \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570680 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv7bz\" (UniqueName: \"kubernetes.io/projected/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-kube-api-access-kv7bz\") pod \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570802 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-logs\") pod \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570846 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-combined-ca-bundle\") pod \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\" (UID: \"a8ef3108-d8e0-424d-be70-8bcab25d2c0b\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.570900 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-scripts\") pod \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\" (UID: \"d6b5da9f-e5d4-4629-89a9-1d215475d3bb\") "
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.575302 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.575332 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsq9q\" (UniqueName: \"kubernetes.io/projected/751b5593-10c9-46a0-bb4d-141ecbc13e10-kube-api-access-vsq9q\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.584295 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-logs" (OuterVolumeSpecName: "logs") pod "d6b5da9f-e5d4-4629-89a9-1d215475d3bb" (UID: "d6b5da9f-e5d4-4629-89a9-1d215475d3bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.589706 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-scripts" (OuterVolumeSpecName: "scripts") pod "d6b5da9f-e5d4-4629-89a9-1d215475d3bb" (UID: "d6b5da9f-e5d4-4629-89a9-1d215475d3bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.589986 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-kube-api-access-jk6jd" (OuterVolumeSpecName: "kube-api-access-jk6jd") pod "a8ef3108-d8e0-424d-be70-8bcab25d2c0b" (UID: "a8ef3108-d8e0-424d-be70-8bcab25d2c0b"). InnerVolumeSpecName "kube-api-access-jk6jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.596334 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-kube-api-access-kv7bz" (OuterVolumeSpecName: "kube-api-access-kv7bz") pod "d6b5da9f-e5d4-4629-89a9-1d215475d3bb" (UID: "d6b5da9f-e5d4-4629-89a9-1d215475d3bb"). InnerVolumeSpecName "kube-api-access-kv7bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.637014 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a8ef3108-d8e0-424d-be70-8bcab25d2c0b" (UID: "a8ef3108-d8e0-424d-be70-8bcab25d2c0b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.648151 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6b5da9f-e5d4-4629-89a9-1d215475d3bb" (UID: "d6b5da9f-e5d4-4629-89a9-1d215475d3bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.660136 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-config" (OuterVolumeSpecName: "config") pod "751b5593-10c9-46a0-bb4d-141ecbc13e10" (UID: "751b5593-10c9-46a0-bb4d-141ecbc13e10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.678730 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-config-data" (OuterVolumeSpecName: "config-data") pod "d6b5da9f-e5d4-4629-89a9-1d215475d3bb" (UID: "d6b5da9f-e5d4-4629-89a9-1d215475d3bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680250 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-logs\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680273 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/751b5593-10c9-46a0-bb4d-141ecbc13e10-config\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680318 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680328 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk6jd\" (UniqueName: \"kubernetes.io/projected/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-kube-api-access-jk6jd\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680338 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680348 4752 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680356 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.680364 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv7bz\" (UniqueName: \"kubernetes.io/projected/d6b5da9f-e5d4-4629-89a9-1d215475d3bb-kube-api-access-kv7bz\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.686031 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8ef3108-d8e0-424d-be70-8bcab25d2c0b" (UID: "a8ef3108-d8e0-424d-be70-8bcab25d2c0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.721901 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qht9v" event={"ID":"d6b5da9f-e5d4-4629-89a9-1d215475d3bb","Type":"ContainerDied","Data":"cdd4d56e17e12f2a1183e0fc172de80c3c9296c80dac10c351a1363f44b5a8da"} Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.721938 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd4d56e17e12f2a1183e0fc172de80c3c9296c80dac10c351a1363f44b5a8da" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.722005 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qht9v" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.744093 4752 generic.go:334] "Generic (PLEG): container finished" podID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerID="a3ba8efb1a4e8e51e3a3fa08edb53da661c6de5134ab05d958bb1548b6572d84" exitCode=0 Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.744212 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.744194 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"01b27ce5-269a-4d59-b7bf-51a805357d4a","Type":"ContainerDied","Data":"a3ba8efb1a4e8e51e3a3fa08edb53da661c6de5134ab05d958bb1548b6572d84"} Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.750329 4752 scope.go:117] "RemoveContainer" containerID="a3ba8efb1a4e8e51e3a3fa08edb53da661c6de5134ab05d958bb1548b6572d84" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.771023 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42ddad-62b5-482e-b7e4-015f4e138979","Type":"ContainerStarted","Data":"9f5d0bd9c13f4f365c3572dede35d8b13ecb1864df5491c7af8d2cd90149180a"} Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.785057 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbz8k\" (UniqueName: \"kubernetes.io/projected/01b27ce5-269a-4d59-b7bf-51a805357d4a-kube-api-access-rbz8k\") pod \"01b27ce5-269a-4d59-b7bf-51a805357d4a\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.785152 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-config-data\") pod \"01b27ce5-269a-4d59-b7bf-51a805357d4a\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.785220 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-custom-prometheus-ca\") pod \"01b27ce5-269a-4d59-b7bf-51a805357d4a\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.785238 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b27ce5-269a-4d59-b7bf-51a805357d4a-logs\") pod \"01b27ce5-269a-4d59-b7bf-51a805357d4a\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.785261 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-combined-ca-bundle\") pod \"01b27ce5-269a-4d59-b7bf-51a805357d4a\" (UID: \"01b27ce5-269a-4d59-b7bf-51a805357d4a\") " Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.785634 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef3108-d8e0-424d-be70-8bcab25d2c0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.797966 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5847fc978b-f7xwp" event={"ID":"e89f840e-6565-49f3-8d71-839114e8044b","Type":"ContainerStarted","Data":"e6996685fa03753caa09b101b4d4d456a64325d75ab4163c961b3bc1582746df"} Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.798012 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5847fc978b-f7xwp" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.798289 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b27ce5-269a-4d59-b7bf-51a805357d4a-logs" (OuterVolumeSpecName: "logs") pod "01b27ce5-269a-4d59-b7bf-51a805357d4a" (UID: "01b27ce5-269a-4d59-b7bf-51a805357d4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.802129 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b27ce5-269a-4d59-b7bf-51a805357d4a-kube-api-access-rbz8k" (OuterVolumeSpecName: "kube-api-access-rbz8k") pod "01b27ce5-269a-4d59-b7bf-51a805357d4a" (UID: "01b27ce5-269a-4d59-b7bf-51a805357d4a"). InnerVolumeSpecName "kube-api-access-rbz8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.829163 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bznl9" event={"ID":"a8ef3108-d8e0-424d-be70-8bcab25d2c0b","Type":"ContainerDied","Data":"8a02d412d65cc01a5b3a33e6b55a65439eb5b30887be59d9ce804b3e19697c06"} Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.829205 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a02d412d65cc01a5b3a33e6b55a65439eb5b30887be59d9ce804b3e19697c06" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.829290 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bznl9" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.858540 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fwcgf" event={"ID":"751b5593-10c9-46a0-bb4d-141ecbc13e10","Type":"ContainerDied","Data":"7fa89dea8f4a7c8d4b3b348071b713d3bc8b468cfd5059e59be01b1cc580bae8"} Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.858598 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa89dea8f4a7c8d4b3b348071b713d3bc8b468cfd5059e59be01b1cc580bae8" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.858985 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fwcgf" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.867249 4752 scope.go:117] "RemoveContainer" containerID="6e1db621e927b9d34801b2b2e5e9205c41ba799cf6fb7841043f036d9a3df9f7" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.880118 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01b27ce5-269a-4d59-b7bf-51a805357d4a" (UID: "01b27ce5-269a-4d59-b7bf-51a805357d4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.885138 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.885118797 podStartE2EDuration="7.885118797s" podCreationTimestamp="2026-01-22 10:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:53.830874846 +0000 UTC m=+1173.060817764" watchObservedRunningTime="2026-01-22 10:44:53.885118797 +0000 UTC m=+1173.115061695" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.887277 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b27ce5-269a-4d59-b7bf-51a805357d4a-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.887299 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.887308 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbz8k\" (UniqueName: \"kubernetes.io/projected/01b27ce5-269a-4d59-b7bf-51a805357d4a-kube-api-access-rbz8k\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.901343 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "01b27ce5-269a-4d59-b7bf-51a805357d4a" (UID: "01b27ce5-269a-4d59-b7bf-51a805357d4a"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.904021 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-86f449cb4d-x9x9z"] Jan 22 10:44:53 crc kubenswrapper[4752]: E0122 10:44:53.905560 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751b5593-10c9-46a0-bb4d-141ecbc13e10" containerName="neutron-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905583 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="751b5593-10c9-46a0-bb4d-141ecbc13e10" containerName="neutron-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: E0122 10:44:53.905598 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api-log" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905604 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api-log" Jan 22 10:44:53 crc kubenswrapper[4752]: E0122 10:44:53.905613 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" containerName="placement-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905620 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" containerName="placement-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: E0122 10:44:53.905632 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905638 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api" Jan 22 10:44:53 crc kubenswrapper[4752]: E0122 10:44:53.905664 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" containerName="barbican-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905670 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" containerName="barbican-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905843 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" containerName="placement-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905872 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api-log" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905888 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" containerName="watcher-api" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905895 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" containerName="barbican-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.905905 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="751b5593-10c9-46a0-bb4d-141ecbc13e10" containerName="neutron-db-sync" Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.906837 4752 util.go:30] "No sandbox for pod can be found. 
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.913819 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-86f449cb4d-x9x9z"]
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.918050 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.918196 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9jsbv"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.918271 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.918266 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.918471 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.923836 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76855dd879-cshkz"]
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.925389 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.937959 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76855dd879-cshkz"]
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.942250 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5847fc978b-f7xwp" podStartSLOduration=2.942232782 podStartE2EDuration="2.942232782s" podCreationTimestamp="2026-01-22 10:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:53.879144565 +0000 UTC m=+1173.109087473" watchObservedRunningTime="2026-01-22 10:44:53.942232782 +0000 UTC m=+1173.172175690"
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.942281 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-config-data" (OuterVolumeSpecName: "config-data") pod "01b27ce5-269a-4d59-b7bf-51a805357d4a" (UID: "01b27ce5-269a-4d59-b7bf-51a805357d4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.993159 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:53 crc kubenswrapper[4752]: I0122 10:44:53.993191 4752 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/01b27ce5-269a-4d59-b7bf-51a805357d4a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.020916 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59bf4d5494-h8d46"]
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.022363 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.031810 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.032140 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.032977 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-l5ss6"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.033118 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.046400 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.046439 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.046462 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59bf4d5494-h8d46"]
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095126 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-svc\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095170 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a57abe3-6121-46d0-9370-58a6ef6e35fe-logs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095209 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-swift-storage-0\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095249 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-public-tls-certs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095287 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-internal-tls-certs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095315 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-combined-ca-bundle\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095348 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6pgz\" (UniqueName: \"kubernetes.io/projected/4cecada1-c407-4aff-83a3-9af1f6a94efa-kube-api-access-f6pgz\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095372 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-ovndb-tls-certs\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095400 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-httpd-config\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095428 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-nb\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095474 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vb9w\" (UniqueName: \"kubernetes.io/projected/d61fa94a-753f-44b1-a435-d362b93af96d-kube-api-access-4vb9w\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095533 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-sb\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095556 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-config\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095579 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-config\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095605 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-scripts\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095633 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-config-data\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095665 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-combined-ca-bundle\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.095688 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfp6\" (UniqueName: \"kubernetes.io/projected/0a57abe3-6121-46d0-9370-58a6ef6e35fe-kube-api-access-5wfp6\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.096090 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.143127 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196786 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-sb\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196829 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-config\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196867 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-config\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196889 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-scripts\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196923 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-config-data\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196962 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-combined-ca-bundle\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.196984 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfp6\" (UniqueName: \"kubernetes.io/projected/0a57abe3-6121-46d0-9370-58a6ef6e35fe-kube-api-access-5wfp6\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197017 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-svc\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197039 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a57abe3-6121-46d0-9370-58a6ef6e35fe-logs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197154 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-swift-storage-0\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197184 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-public-tls-certs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197214 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-internal-tls-certs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197245 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-combined-ca-bundle\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197269 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6pgz\" (UniqueName: \"kubernetes.io/projected/4cecada1-c407-4aff-83a3-9af1f6a94efa-kube-api-access-f6pgz\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197286 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-ovndb-tls-certs\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197313 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-httpd-config\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197362 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-nb\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.197401 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vb9w\" (UniqueName: \"kubernetes.io/projected/d61fa94a-753f-44b1-a435-d362b93af96d-kube-api-access-4vb9w\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.198588 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-sb\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.199147 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-config\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.203546 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a57abe3-6121-46d0-9370-58a6ef6e35fe-logs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.204400 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-svc\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.205752 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-swift-storage-0\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz"
Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.208212
4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-nb\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.209089 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-config\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.226774 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfp6\" (UniqueName: \"kubernetes.io/projected/0a57abe3-6121-46d0-9370-58a6ef6e35fe-kube-api-access-5wfp6\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.231744 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-ovndb-tls-certs\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.236544 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vb9w\" (UniqueName: \"kubernetes.io/projected/d61fa94a-753f-44b1-a435-d362b93af96d-kube-api-access-4vb9w\") pod \"dnsmasq-dns-76855dd879-cshkz\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " pod="openstack/dnsmasq-dns-76855dd879-cshkz" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.237439 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-internal-tls-certs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.238214 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-scripts\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.238257 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-httpd-config\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.238775 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6pgz\" (UniqueName: \"kubernetes.io/projected/4cecada1-c407-4aff-83a3-9af1f6a94efa-kube-api-access-f6pgz\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.239681 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-combined-ca-bundle\") pod \"neutron-59bf4d5494-h8d46\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.240493 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-combined-ca-bundle\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.241722 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-public-tls-certs\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.256579 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a57abe3-6121-46d0-9370-58a6ef6e35fe-config-data\") pod \"placement-86f449cb4d-x9x9z\" (UID: \"0a57abe3-6121-46d0-9370-58a6ef6e35fe\") " pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.259342 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.272292 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76855dd879-cshkz" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.360013 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.791735 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-677cfc4d77-thl6g"] Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.805867 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.809998 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tldwq" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.810232 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.811702 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.811874 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-677cfc4d77-thl6g"] Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.853202 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-759c77fb4d-d9xcr"] Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.854582 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.856995 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.903258 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-759c77fb4d-d9xcr"] Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.915761 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-config-data-custom\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.915820 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-config-data\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.915881 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e64c278-d203-43c6-9899-8af0a062c3da-logs\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.921132 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.921469 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.921486 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"01b27ce5-269a-4d59-b7bf-51a805357d4a","Type":"ContainerDied","Data":"59226a995de447449b63c456140ebf3944c38fa7acfc93169db76886af506c4d"} Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.921934 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.927763 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zkn2\" (UniqueName: \"kubernetes.io/projected/2e64c278-d203-43c6-9899-8af0a062c3da-kube-api-access-9zkn2\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:54 crc kubenswrapper[4752]: I0122 10:44:54.927822 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-combined-ca-bundle\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.036258 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-config-data\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.036563 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e64c278-d203-43c6-9899-8af0a062c3da-logs\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.041442 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e64c278-d203-43c6-9899-8af0a062c3da-logs\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.042986 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm758\" (UniqueName: \"kubernetes.io/projected/bc924366-228b-413d-b8f5-9f71257add3c-kube-api-access-wm758\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.043070 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc924366-228b-413d-b8f5-9f71257add3c-logs\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " 
pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.043173 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-config-data-custom\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.043220 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-combined-ca-bundle\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.045599 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76855dd879-cshkz"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.057103 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-config-data\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.057257 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zkn2\" (UniqueName: \"kubernetes.io/projected/2e64c278-d203-43c6-9899-8af0a062c3da-kube-api-access-9zkn2\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.057348 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-combined-ca-bundle\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.057507 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-config-data-custom\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.065000 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-config-data\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.085138 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-config-data-custom\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc 
kubenswrapper[4752]: I0122 10:44:55.086173 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e64c278-d203-43c6-9899-8af0a062c3da-combined-ca-bundle\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.086352 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zkn2\" (UniqueName: \"kubernetes.io/projected/2e64c278-d203-43c6-9899-8af0a062c3da-kube-api-access-9zkn2\") pod \"barbican-worker-677cfc4d77-thl6g\" (UID: \"2e64c278-d203-43c6-9899-8af0a062c3da\") " pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.119482 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-86f449cb4d-x9x9z"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.154408 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-677cfc4d77-thl6g" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.154761 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f5db4cc5-c4kxs"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.159132 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.160223 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm758\" (UniqueName: \"kubernetes.io/projected/bc924366-228b-413d-b8f5-9f71257add3c-kube-api-access-wm758\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.162753 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc924366-228b-413d-b8f5-9f71257add3c-logs\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.162840 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-config-data-custom\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.163039 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-combined-ca-bundle\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.170081 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc924366-228b-413d-b8f5-9f71257add3c-logs\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 
crc kubenswrapper[4752]: I0122 10:44:55.175269 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-config-data\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.176121 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-config-data-custom\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.181967 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-combined-ca-bundle\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.192874 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc924366-228b-413d-b8f5-9f71257add3c-config-data\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.206202 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.211158 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.212334 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm758\" (UniqueName: \"kubernetes.io/projected/bc924366-228b-413d-b8f5-9f71257add3c-kube-api-access-wm758\") pod \"barbican-keystone-listener-759c77fb4d-d9xcr\" (UID: \"bc924366-228b-413d-b8f5-9f71257add3c\") " pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.231411 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76855dd879-cshkz"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.242654 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f5db4cc5-c4kxs"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.262545 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.265166 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.268894 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.269082 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.269199 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.278525 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-config\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.278565 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv82b\" (UniqueName: \"kubernetes.io/projected/ce2ad46c-0d8c-4c7f-8025-89493e874a85-kube-api-access-xv82b\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.278618 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-svc\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.278712 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-swift-storage-0\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.278731 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-sb\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.278841 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-nb\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.298055 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5ccb469cdb-gdc7d"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.305213 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.312162 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.313620 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.328682 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ccb469cdb-gdc7d"] Jan 22 10:44:55 crc kubenswrapper[4752]: W0122 10:44:55.369056 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cecada1_c407_4aff_83a3_9af1f6a94efa.slice/crio-0b5521fde68d2ae06d96c8198e16de38033ba27ecfb4f1ef9dc20f7ead048d4d WatchSource:0}: Error finding container 0b5521fde68d2ae06d96c8198e16de38033ba27ecfb4f1ef9dc20f7ead048d4d: Status 404 returned error can't find the container with id 0b5521fde68d2ae06d96c8198e16de38033ba27ecfb4f1ef9dc20f7ead048d4d Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.372140 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59bf4d5494-h8d46"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.381778 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-config\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.381832 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv82b\" (UniqueName: \"kubernetes.io/projected/ce2ad46c-0d8c-4c7f-8025-89493e874a85-kube-api-access-xv82b\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.381952 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-svc\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.381992 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-combined-ca-bundle\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382018 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcft\" (UniqueName: \"kubernetes.io/projected/dad95ced-e983-4d80-8901-8cd6537337cf-kube-api-access-fzcft\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382034 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data-custom\") pod \"barbican-api-5ccb469cdb-gdc7d\" 
(UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382062 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382086 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382117 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14be83f-91df-46ae-87fd-fada733ccb95-logs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382155 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dad95ced-e983-4d80-8901-8cd6537337cf-logs\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382187 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-swift-storage-0\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382218 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-sb\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382237 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-config-data\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382252 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382307 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc 
kubenswrapper[4752]: I0122 10:44:55.382323 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnf5\" (UniqueName: \"kubernetes.io/projected/b14be83f-91df-46ae-87fd-fada733ccb95-kube-api-access-4hnf5\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382410 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-nb\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382428 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.382702 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-config\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.383178 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-svc\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.383282 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-swift-storage-0\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.383875 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-sb\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.383894 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-nb\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.402072 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv82b\" (UniqueName: \"kubernetes.io/projected/ce2ad46c-0d8c-4c7f-8025-89493e874a85-kube-api-access-xv82b\") pod \"dnsmasq-dns-f5db4cc5-c4kxs\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.484740 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fzcft\" (UniqueName: \"kubernetes.io/projected/dad95ced-e983-4d80-8901-8cd6537337cf-kube-api-access-fzcft\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.484778 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data-custom\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.484809 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.484828 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.484869 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14be83f-91df-46ae-87fd-fada733ccb95-logs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.484905 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dad95ced-e983-4d80-8901-8cd6537337cf-logs\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.487837 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-config-data\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.490023 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.490142 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.490162 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnf5\" (UniqueName: \"kubernetes.io/projected/b14be83f-91df-46ae-87fd-fada733ccb95-kube-api-access-4hnf5\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" 
Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.490322 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.490447 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-combined-ca-bundle\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.492228 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.492225 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dad95ced-e983-4d80-8901-8cd6537337cf-logs\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.498000 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data-custom\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.505514 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14be83f-91df-46ae-87fd-fada733ccb95-logs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.505531 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.520982 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.524344 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.531965 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.532906 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.534527 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.535503 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcft\" (UniqueName: \"kubernetes.io/projected/dad95ced-e983-4d80-8901-8cd6537337cf-kube-api-access-fzcft\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.538651 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14be83f-91df-46ae-87fd-fada733ccb95-config-data\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.539779 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-combined-ca-bundle\") pod \"barbican-api-5ccb469cdb-gdc7d\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.549348 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnf5\" (UniqueName: \"kubernetes.io/projected/b14be83f-91df-46ae-87fd-fada733ccb95-kube-api-access-4hnf5\") pod \"watcher-api-0\" (UID: \"b14be83f-91df-46ae-87fd-fada733ccb95\") " pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.816589 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.824295 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.899731 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-677cfc4d77-thl6g"] Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.956095 4752 generic.go:334] "Generic (PLEG): container finished" podID="d61fa94a-753f-44b1-a435-d362b93af96d" containerID="0904992ee088d1c6628ad86d0f3e6cdf0c31c46cb3f8fa9c82e89550d31ccc08" exitCode=0 Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.956218 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76855dd879-cshkz" event={"ID":"d61fa94a-753f-44b1-a435-d362b93af96d","Type":"ContainerDied","Data":"0904992ee088d1c6628ad86d0f3e6cdf0c31c46cb3f8fa9c82e89550d31ccc08"} Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.956246 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76855dd879-cshkz" event={"ID":"d61fa94a-753f-44b1-a435-d362b93af96d","Type":"ContainerStarted","Data":"56521e0921bc7d3b5f64b31e82896687a921f2f32ab76da56e3c2f67962e4497"} Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.972096 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59bf4d5494-h8d46" event={"ID":"4cecada1-c407-4aff-83a3-9af1f6a94efa","Type":"ContainerStarted","Data":"0b5521fde68d2ae06d96c8198e16de38033ba27ecfb4f1ef9dc20f7ead048d4d"} Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.987798 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86f449cb4d-x9x9z" event={"ID":"0a57abe3-6121-46d0-9370-58a6ef6e35fe","Type":"ContainerStarted","Data":"e71a53f74b973226a0fd8fe1e759a28ab9f35f5aa1711bc96e230922d39d35a9"} Jan 22 10:44:55 crc kubenswrapper[4752]: I0122 10:44:55.987834 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86f449cb4d-x9x9z" event={"ID":"0a57abe3-6121-46d0-9370-58a6ef6e35fe","Type":"ContainerStarted","Data":"ecc91b8e029d88367a0ec0ef553de1586ce18ade7deaad7cf160e4f68c835e28"} Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.149004 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6796b596df-mjdp9"] Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.151419 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.153942 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.154167 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.175446 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6796b596df-mjdp9"] Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.211508 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-759c77fb4d-d9xcr"] Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.230941 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-internal-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.231030 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-public-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.231057 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-httpd-config\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.231142 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-combined-ca-bundle\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.231165 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-ovndb-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.231180 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fd7f\" (UniqueName: \"kubernetes.io/projected/65e6e859-6086-42d9-b50c-f60e58d153a2-kube-api-access-4fd7f\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.231211 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-config\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 
10:44:56 crc kubenswrapper[4752]: W0122 10:44:56.325238 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc924366_228b_413d_b8f5_9f71257add3c.slice/crio-e0195e9bb3dc75103bc29d7f96c6fab9b502f27887be8e30c176517ceedf8259 WatchSource:0}: Error finding container e0195e9bb3dc75103bc29d7f96c6fab9b502f27887be8e30c176517ceedf8259: Status 404 returned error can't find the container with id e0195e9bb3dc75103bc29d7f96c6fab9b502f27887be8e30c176517ceedf8259 Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.333817 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-config\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.333942 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-internal-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.333988 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-public-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.334018 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-httpd-config\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.334097 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-combined-ca-bundle\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.334122 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-ovndb-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.334142 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fd7f\" (UniqueName: \"kubernetes.io/projected/65e6e859-6086-42d9-b50c-f60e58d153a2-kube-api-access-4fd7f\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.354694 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-httpd-config\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " 
pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.362667 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-ovndb-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.368207 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-internal-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.371577 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-combined-ca-bundle\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.375707 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-config\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.387203 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f5db4cc5-c4kxs"] Jan 22 10:44:56 crc kubenswrapper[4752]: W0122 10:44:56.401837 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce2ad46c_0d8c_4c7f_8025_89493e874a85.slice/crio-22d3d396fb8823cb94dbb8353b92a55f705d74ac6b7b673839a6d674bc180894 WatchSource:0}: Error finding container 22d3d396fb8823cb94dbb8353b92a55f705d74ac6b7b673839a6d674bc180894: Status 404 returned error can't find the container with id 22d3d396fb8823cb94dbb8353b92a55f705d74ac6b7b673839a6d674bc180894 Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.447288 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fd7f\" (UniqueName: \"kubernetes.io/projected/65e6e859-6086-42d9-b50c-f60e58d153a2-kube-api-access-4fd7f\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.455293 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65e6e859-6086-42d9-b50c-f60e58d153a2-public-tls-certs\") pod \"neutron-6796b596df-mjdp9\" (UID: \"65e6e859-6086-42d9-b50c-f60e58d153a2\") " pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.553273 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.774920 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.774963 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.785212 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76855dd879-cshkz" Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.893729 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.936363 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ccb469cdb-gdc7d"] Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.976069 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-swift-storage-0\") pod \"d61fa94a-753f-44b1-a435-d362b93af96d\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.976432 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-svc\") pod \"d61fa94a-753f-44b1-a435-d362b93af96d\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.976459 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-config\") pod \"d61fa94a-753f-44b1-a435-d362b93af96d\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.976501 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-nb\") pod \"d61fa94a-753f-44b1-a435-d362b93af96d\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.976546 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-sb\") pod \"d61fa94a-753f-44b1-a435-d362b93af96d\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " Jan 22 10:44:56 crc kubenswrapper[4752]: I0122 10:44:56.976610 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vb9w\" (UniqueName: \"kubernetes.io/projected/d61fa94a-753f-44b1-a435-d362b93af96d-kube-api-access-4vb9w\") pod \"d61fa94a-753f-44b1-a435-d362b93af96d\" (UID: \"d61fa94a-753f-44b1-a435-d362b93af96d\") " Jan 22 10:44:56 crc kubenswrapper[4752]: W0122 10:44:56.986543 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddad95ced_e983_4d80_8901_8cd6537337cf.slice/crio-1c6df7deb156273057f4b8aec5fcd1f43c501ac54e37d38ecfcc8a67409022d6 WatchSource:0}: Error finding container 1c6df7deb156273057f4b8aec5fcd1f43c501ac54e37d38ecfcc8a67409022d6: Status 404 returned error can't find the container with id 
1c6df7deb156273057f4b8aec5fcd1f43c501ac54e37d38ecfcc8a67409022d6 Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.028974 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61fa94a-753f-44b1-a435-d362b93af96d-kube-api-access-4vb9w" (OuterVolumeSpecName: "kube-api-access-4vb9w") pod "d61fa94a-753f-44b1-a435-d362b93af96d" (UID: "d61fa94a-753f-44b1-a435-d362b93af96d"). InnerVolumeSpecName "kube-api-access-4vb9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.047028 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76855dd879-cshkz" event={"ID":"d61fa94a-753f-44b1-a435-d362b93af96d","Type":"ContainerDied","Data":"56521e0921bc7d3b5f64b31e82896687a921f2f32ab76da56e3c2f67962e4497"} Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.047088 4752 scope.go:117] "RemoveContainer" containerID="0904992ee088d1c6628ad86d0f3e6cdf0c31c46cb3f8fa9c82e89550d31ccc08" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.047207 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76855dd879-cshkz" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.064116 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677cfc4d77-thl6g" event={"ID":"2e64c278-d203-43c6-9899-8af0a062c3da","Type":"ContainerStarted","Data":"681df9e70991568b6b988209cd70668f4f212a490cb1c3f49b77696cc467f01e"} Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.088920 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vb9w\" (UniqueName: \"kubernetes.io/projected/d61fa94a-753f-44b1-a435-d362b93af96d-kube-api-access-4vb9w\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.101928 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59bf4d5494-h8d46" event={"ID":"4cecada1-c407-4aff-83a3-9af1f6a94efa","Type":"ContainerStarted","Data":"d18f57d26752f629bc27360b054caa10c4aa7226f020ad68c748685e6f8cd063"} Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.197816 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b27ce5-269a-4d59-b7bf-51a805357d4a" path="/var/lib/kubelet/pods/01b27ce5-269a-4d59-b7bf-51a805357d4a/volumes" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.346941 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d61fa94a-753f-44b1-a435-d362b93af96d" (UID: "d61fa94a-753f-44b1-a435-d362b93af96d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.397954 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.534534 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d61fa94a-753f-44b1-a435-d362b93af96d" (UID: "d61fa94a-753f-44b1-a435-d362b93af96d"). InnerVolumeSpecName "ovsdbserver-sb". 
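
Interleaved with the neutron startup, pod d61fa94a-753f-44b1-a435-d362b93af96d (the replaced dnsmasq-dns-76855dd879-cshkz) is being torn down: each volume moves through UnmountVolume started, then UnmountVolume.TearDown succeeded, then Volume detached, and only once the set drains does kubelet_volumes.go clean the orphaned volumes dir. A toy Go model of that per-volume progression (illustrative only; the real reconciler drives these transitions asynchronously per volume, which is why the lines interleave):

package main

import "fmt"

type phase int

const (
	mounted phase = iota
	unmountStarted
	tornDown
	detached
)

func (p phase) String() string {
	return [...]string{"mounted", "UnmountVolume started", "TearDown succeeded", "Volume detached"}[p]
}

// teardown walks every volume of a terminating pod through the phases in
// order; once all reach detached, the pod's volumes dir is orphan-cleanable.
func teardown(pod string, volumes []string) {
	for next := unmountStarted; next <= detached; next++ {
		for _, v := range volumes {
			fmt.Printf("%s/%s: %s\n", pod, v, next)
		}
	}
	fmt.Println("all volumes detached: orphaned volumes dir cleanup eligible")
}

func main() {
	teardown("openstack/dnsmasq-dns-76855dd879-cshkz",
		[]string{"dns-svc", "config", "ovsdbserver-nb", "ovsdbserver-sb", "dns-swift-storage-0", "kube-api-access-4vb9w"})
}
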
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.564547 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d61fa94a-753f-44b1-a435-d362b93af96d" (UID: "d61fa94a-753f-44b1-a435-d362b93af96d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.589863 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d61fa94a-753f-44b1-a435-d362b93af96d" (UID: "d61fa94a-753f-44b1-a435-d362b93af96d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.594378 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-config" (OuterVolumeSpecName: "config") pod "d61fa94a-753f-44b1-a435-d362b93af96d" (UID: "d61fa94a-753f-44b1-a435-d362b93af96d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.601089 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.601128 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.601140 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.601150 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61fa94a-753f-44b1-a435-d362b93af96d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.838379 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.838427 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6796b596df-mjdp9"] Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.838508 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.838522 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b14be83f-91df-46ae-87fd-fada733ccb95","Type":"ContainerStarted","Data":"7ea37b552934274307be47b818f0225dda7df2913bb583ab8ae9e240bfac1573"} Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.838543 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" 
event={"ID":"bc924366-228b-413d-b8f5-9f71257add3c","Type":"ContainerStarted","Data":"e0195e9bb3dc75103bc29d7f96c6fab9b502f27887be8e30c176517ceedf8259"} Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.838559 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" event={"ID":"ce2ad46c-0d8c-4c7f-8025-89493e874a85","Type":"ContainerStarted","Data":"22d3d396fb8823cb94dbb8353b92a55f705d74ac6b7b673839a6d674bc180894"} Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.839158 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.840008 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.958419 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76855dd879-cshkz"] Jan 22 10:44:57 crc kubenswrapper[4752]: I0122 10:44:57.975215 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76855dd879-cshkz"] Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.298961 4752 generic.go:334] "Generic (PLEG): container finished" podID="806e176d-686f-4523-822c-f519f6a6076d" containerID="f20b462d0c76ace0ee5befef86ad31d99bd028e8380e59b067ace82adc442b92" exitCode=1 Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.299064 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerDied","Data":"f20b462d0c76ace0ee5befef86ad31d99bd028e8380e59b067ace82adc442b92"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.300940 4752 scope.go:117] "RemoveContainer" containerID="f20b462d0c76ace0ee5befef86ad31d99bd028e8380e59b067ace82adc442b92" Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.302071 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796b596df-mjdp9" event={"ID":"65e6e859-6086-42d9-b50c-f60e58d153a2","Type":"ContainerStarted","Data":"c9a4583c26f01408d72af565466ccf827f542fb76103046866752ac540c12721"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.302409 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796b596df-mjdp9" event={"ID":"65e6e859-6086-42d9-b50c-f60e58d153a2","Type":"ContainerStarted","Data":"7fff605ddae7eb941b924431625ce889a443860fc479bed42dc9672291665224"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.320152 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b14be83f-91df-46ae-87fd-fada733ccb95","Type":"ContainerStarted","Data":"2ce47eb15b9ffac1d5402b32d0a4e50059ffe2c01dd04333ad472145a52544e2"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.364906 4752 generic.go:334] "Generic (PLEG): container finished" podID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerID="44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062" exitCode=0 Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.364985 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" event={"ID":"ce2ad46c-0d8c-4c7f-8025-89493e874a85","Type":"ContainerDied","Data":"44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.404295 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59bf4d5494-h8d46" 
event={"ID":"4cecada1-c407-4aff-83a3-9af1f6a94efa","Type":"ContainerStarted","Data":"4fad7817e4352e2c73685b768f354419e94b86c1af96f13854a329ab0fb5f154"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.404980 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.432772 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59bf4d5494-h8d46" podStartSLOduration=5.432756138 podStartE2EDuration="5.432756138s" podCreationTimestamp="2026-01-22 10:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:58.432187343 +0000 UTC m=+1177.662130251" watchObservedRunningTime="2026-01-22 10:44:58.432756138 +0000 UTC m=+1177.662699046" Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.440228 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccb469cdb-gdc7d" event={"ID":"dad95ced-e983-4d80-8901-8cd6537337cf","Type":"ContainerStarted","Data":"851e342a903807eb9a64f951507bbc2f5895b4d331350d6874a0cb759d075578"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.440269 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccb469cdb-gdc7d" event={"ID":"dad95ced-e983-4d80-8901-8cd6537337cf","Type":"ContainerStarted","Data":"1c6df7deb156273057f4b8aec5fcd1f43c501ac54e37d38ecfcc8a67409022d6"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.448143 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86f449cb4d-x9x9z" event={"ID":"0a57abe3-6121-46d0-9370-58a6ef6e35fe","Type":"ContainerStarted","Data":"e646bcf2565daca8320134a685d0f30bc3a054fda8047918a2e87766b087f73f"} Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.448387 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.448421 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.496706 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-86f449cb4d-x9x9z" podStartSLOduration=5.496684186 podStartE2EDuration="5.496684186s" podCreationTimestamp="2026-01-22 10:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:58.48901039 +0000 UTC m=+1177.718953298" watchObservedRunningTime="2026-01-22 10:44:58.496684186 +0000 UTC m=+1177.726627094" Jan 22 10:44:58 crc kubenswrapper[4752]: I0122 10:44:58.512183 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.113649 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61fa94a-753f-44b1-a435-d362b93af96d" path="/var/lib/kubelet/pods/d61fa94a-753f-44b1-a435-d362b93af96d/volumes" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.459543 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerStarted","Data":"850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16"} Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 
10:44:59.465028 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796b596df-mjdp9" event={"ID":"65e6e859-6086-42d9-b50c-f60e58d153a2","Type":"ContainerStarted","Data":"c36dde23eebd0ef6ca6cd9bc7fb52c139aaffdb0f3e64801397545fffd7e2912"} Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.465179 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.467935 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b14be83f-91df-46ae-87fd-fada733ccb95","Type":"ContainerStarted","Data":"7f31b9ce6cdbd0fdd33b09c1e19730943c945483b5986bb072342e19cb166028"} Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.468802 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.484268 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" event={"ID":"ce2ad46c-0d8c-4c7f-8025-89493e874a85","Type":"ContainerStarted","Data":"ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7"} Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.484934 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.509673 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccb469cdb-gdc7d" event={"ID":"dad95ced-e983-4d80-8901-8cd6537337cf","Type":"ContainerStarted","Data":"e703b6f9a9799ba9383ee40ee47c862b1adc85d96047aba84bd22897846565f3"} Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.509714 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.510488 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.510505 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.511403 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.512257 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6796b596df-mjdp9" podStartSLOduration=3.51222579 podStartE2EDuration="3.51222579s" podCreationTimestamp="2026-01-22 10:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:59.507166041 +0000 UTC m=+1178.737108949" watchObservedRunningTime="2026-01-22 10:44:59.51222579 +0000 UTC m=+1178.742168688" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.550647 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.550623278 podStartE2EDuration="5.550623278s" podCreationTimestamp="2026-01-22 10:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:59.544273986 +0000 UTC m=+1178.774216894" watchObservedRunningTime="2026-01-22 10:44:59.550623278 +0000 UTC m=+1178.780566186" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.631631 4752 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-api-765775556-hgf76"] Jan 22 10:44:59 crc kubenswrapper[4752]: E0122 10:44:59.632267 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61fa94a-753f-44b1-a435-d362b93af96d" containerName="init" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.632283 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61fa94a-753f-44b1-a435-d362b93af96d" containerName="init" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.632536 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61fa94a-753f-44b1-a435-d362b93af96d" containerName="init" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.641317 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.647441 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.647799 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.670472 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-765775556-hgf76"] Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.676661 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5ccb469cdb-gdc7d" podStartSLOduration=4.676633527 podStartE2EDuration="4.676633527s" podCreationTimestamp="2026-01-22 10:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:59.585440495 +0000 UTC m=+1178.815383413" watchObservedRunningTime="2026-01-22 10:44:59.676633527 +0000 UTC m=+1178.906576435" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.686542 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" podStartSLOduration=5.686516899 podStartE2EDuration="5.686516899s" podCreationTimestamp="2026-01-22 10:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:44:59.63394896 +0000 UTC m=+1178.863891878" watchObservedRunningTime="2026-01-22 10:44:59.686516899 +0000 UTC m=+1178.916459807" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.687229 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-public-tls-certs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.687395 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-combined-ca-bundle\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.687618 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f54c12-342d-4dc4-938b-baf736ba3664-logs\") pod 
\"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.687654 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-internal-tls-certs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.687681 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-config-data\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.687730 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-config-data-custom\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.688004 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9vv\" (UniqueName: \"kubernetes.io/projected/77f54c12-342d-4dc4-938b-baf736ba3664-kube-api-access-cs9vv\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790538 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f54c12-342d-4dc4-938b-baf736ba3664-logs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790796 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-internal-tls-certs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790818 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-config-data\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-config-data-custom\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790886 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9vv\" (UniqueName: 
\"kubernetes.io/projected/77f54c12-342d-4dc4-938b-baf736ba3664-kube-api-access-cs9vv\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790910 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-public-tls-certs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.790957 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-combined-ca-bundle\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.791067 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f54c12-342d-4dc4-938b-baf736ba3664-logs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.800515 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-combined-ca-bundle\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.802995 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-internal-tls-certs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.807817 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-config-data\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.809417 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-config-data-custom\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.811804 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f54c12-342d-4dc4-938b-baf736ba3664-public-tls-certs\") pod \"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:44:59 crc kubenswrapper[4752]: I0122 10:44:59.819913 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9vv\" (UniqueName: \"kubernetes.io/projected/77f54c12-342d-4dc4-938b-baf736ba3664-kube-api-access-cs9vv\") pod 
\"barbican-api-765775556-hgf76\" (UID: \"77f54c12-342d-4dc4-938b-baf736ba3664\") " pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.011763 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.174309 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk"] Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.177063 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.179838 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.180092 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.191411 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk"] Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.203391 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b09c1a7-b13b-45f2-908c-9f3459653f0e-config-volume\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.203433 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrdg\" (UniqueName: \"kubernetes.io/projected/3b09c1a7-b13b-45f2-908c-9f3459653f0e-kube-api-access-dqrdg\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.203457 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b09c1a7-b13b-45f2-908c-9f3459653f0e-secret-volume\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.305908 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b09c1a7-b13b-45f2-908c-9f3459653f0e-config-volume\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.306480 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrdg\" (UniqueName: \"kubernetes.io/projected/3b09c1a7-b13b-45f2-908c-9f3459653f0e-kube-api-access-dqrdg\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 
10:45:00.306510 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b09c1a7-b13b-45f2-908c-9f3459653f0e-secret-volume\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.306819 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b09c1a7-b13b-45f2-908c-9f3459653f0e-config-volume\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.322747 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b09c1a7-b13b-45f2-908c-9f3459653f0e-secret-volume\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.331707 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrdg\" (UniqueName: \"kubernetes.io/projected/3b09c1a7-b13b-45f2-908c-9f3459653f0e-kube-api-access-dqrdg\") pod \"collect-profiles-29484645-zs8tk\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.355595 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56b8d5fdb8-7gp4n" podUID="2af09aa4-9ce8-411f-8634-ac7eb7909555" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.505620 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.654978 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-765775556-hgf76"] Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.819082 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.918947 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:00 crc kubenswrapper[4752]: I0122 10:45:00.947615 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.035282 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk"] Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.550146 4752 generic.go:334] "Generic (PLEG): container finished" podID="3b09c1a7-b13b-45f2-908c-9f3459653f0e" containerID="497c45bb9e441b8f3b1ee9f96ecfac6c5f1e55e78c2ee19d51bf48d82410747c" exitCode=0 Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.550230 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" event={"ID":"3b09c1a7-b13b-45f2-908c-9f3459653f0e","Type":"ContainerDied","Data":"497c45bb9e441b8f3b1ee9f96ecfac6c5f1e55e78c2ee19d51bf48d82410747c"} Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.550261 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" event={"ID":"3b09c1a7-b13b-45f2-908c-9f3459653f0e","Type":"ContainerStarted","Data":"9e189552ab9815ae154ce233a4b84564319d149889eaa3d0703437be0b2c1110"} Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.554031 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765775556-hgf76" event={"ID":"77f54c12-342d-4dc4-938b-baf736ba3664","Type":"ContainerStarted","Data":"c03650eb70881f811ed4bbe8269a8445d4cd92118566666f6e30c512ddeac308"} Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.554097 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765775556-hgf76" event={"ID":"77f54c12-342d-4dc4-938b-baf736ba3664","Type":"ContainerStarted","Data":"e7adfc3d457366f25231b2edcf6238bf8708a6d7fbf691ba454d7d45f61f1929"} Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.554456 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.554546 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:45:01 crc kubenswrapper[4752]: I0122 10:45:01.620922 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.530370 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.588735 4752 generic.go:334] "Generic (PLEG): container finished" podID="806e176d-686f-4523-822c-f519f6a6076d" containerID="850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16" exitCode=1 Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.590034 4752 scope.go:117] 
"RemoveContainer" containerID="850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.590067 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerDied","Data":"850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16"} Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.590139 4752 scope.go:117] "RemoveContainer" containerID="f20b462d0c76ace0ee5befef86ad31d99bd028e8380e59b067ace82adc442b92" Jan 22 10:45:02 crc kubenswrapper[4752]: E0122 10:45:02.590245 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(806e176d-686f-4523-822c-f519f6a6076d)\"" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.664168 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.664302 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.737486 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.749672 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:02 crc kubenswrapper[4752]: I0122 10:45:02.996206 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.383561 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.512113 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrdg\" (UniqueName: \"kubernetes.io/projected/3b09c1a7-b13b-45f2-908c-9f3459653f0e-kube-api-access-dqrdg\") pod \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.512183 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b09c1a7-b13b-45f2-908c-9f3459653f0e-secret-volume\") pod \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.512388 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b09c1a7-b13b-45f2-908c-9f3459653f0e-config-volume\") pod \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\" (UID: \"3b09c1a7-b13b-45f2-908c-9f3459653f0e\") " Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.513164 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b09c1a7-b13b-45f2-908c-9f3459653f0e-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b09c1a7-b13b-45f2-908c-9f3459653f0e" (UID: "3b09c1a7-b13b-45f2-908c-9f3459653f0e"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.519681 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b09c1a7-b13b-45f2-908c-9f3459653f0e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b09c1a7-b13b-45f2-908c-9f3459653f0e" (UID: "3b09c1a7-b13b-45f2-908c-9f3459653f0e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.521082 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b09c1a7-b13b-45f2-908c-9f3459653f0e-kube-api-access-dqrdg" (OuterVolumeSpecName: "kube-api-access-dqrdg") pod "3b09c1a7-b13b-45f2-908c-9f3459653f0e" (UID: "3b09c1a7-b13b-45f2-908c-9f3459653f0e"). InnerVolumeSpecName "kube-api-access-dqrdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.603300 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" event={"ID":"3b09c1a7-b13b-45f2-908c-9f3459653f0e","Type":"ContainerDied","Data":"9e189552ab9815ae154ce233a4b84564319d149889eaa3d0703437be0b2c1110"} Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.603359 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e189552ab9815ae154ce233a4b84564319d149889eaa3d0703437be0b2c1110" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.603961 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.606410 4752 scope.go:117] "RemoveContainer" containerID="850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.616575 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b09c1a7-b13b-45f2-908c-9f3459653f0e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.616615 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrdg\" (UniqueName: \"kubernetes.io/projected/3b09c1a7-b13b-45f2-908c-9f3459653f0e-kube-api-access-dqrdg\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:03 crc kubenswrapper[4752]: I0122 10:45:03.616626 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b09c1a7-b13b-45f2-908c-9f3459653f0e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:03 crc kubenswrapper[4752]: E0122 10:45:03.617503 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(806e176d-686f-4523-822c-f519f6a6076d)\"" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" Jan 22 10:45:04 crc kubenswrapper[4752]: I0122 10:45:04.615959 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" event={"ID":"bc924366-228b-413d-b8f5-9f71257add3c","Type":"ContainerStarted","Data":"a7ff65aba84045807d9a2e35fb746fed5f2a6341fbd1576c94fadfc9d357a21e"} Jan 22 10:45:04 crc 
kubenswrapper[4752]: I0122 10:45:04.617316 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677cfc4d77-thl6g" event={"ID":"2e64c278-d203-43c6-9899-8af0a062c3da","Type":"ContainerStarted","Data":"925859432c3561c073e31e0923a942f3177d39c69d1261a400b5f76f46b4a4e1"} Jan 22 10:45:04 crc kubenswrapper[4752]: I0122 10:45:04.619073 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-765775556-hgf76" event={"ID":"77f54c12-342d-4dc4-938b-baf736ba3664","Type":"ContainerStarted","Data":"509f08754d9b073e4dc17537ff11e650fc84a67b0aaa462b556c1e60985d71e0"} Jan 22 10:45:04 crc kubenswrapper[4752]: I0122 10:45:04.620072 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:45:04 crc kubenswrapper[4752]: I0122 10:45:04.620179 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:45:04 crc kubenswrapper[4752]: I0122 10:45:04.645083 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-765775556-hgf76" podStartSLOduration=5.645059346 podStartE2EDuration="5.645059346s" podCreationTimestamp="2026-01-22 10:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:04.639776681 +0000 UTC m=+1183.869719599" watchObservedRunningTime="2026-01-22 10:45:04.645059346 +0000 UTC m=+1183.875002254" Jan 22 10:45:04 crc kubenswrapper[4752]: I0122 10:45:04.991103 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:45:05 crc kubenswrapper[4752]: I0122 10:45:05.496015 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:45:05 crc kubenswrapper[4752]: I0122 10:45:05.555015 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dccb4fc9c-xqbgq"] Jan 22 10:45:05 crc kubenswrapper[4752]: I0122 10:45:05.555258 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="dnsmasq-dns" containerID="cri-o://08cfc20dcbe820832edef01e28d6d809ed28e5cfe0bd056ffcaaa28d30a8e6af" gracePeriod=10 Jan 22 10:45:05 crc kubenswrapper[4752]: I0122 10:45:05.817775 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 22 10:45:05 crc kubenswrapper[4752]: I0122 10:45:05.826185 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 22 10:45:06 crc kubenswrapper[4752]: I0122 10:45:06.645115 4752 generic.go:334] "Generic (PLEG): container finished" podID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerID="08cfc20dcbe820832edef01e28d6d809ed28e5cfe0bd056ffcaaa28d30a8e6af" exitCode=0 Jan 22 10:45:06 crc kubenswrapper[4752]: I0122 10:45:06.645293 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" event={"ID":"e8778431-6322-4c5f-a77b-5c8d051c10d4","Type":"ContainerDied","Data":"08cfc20dcbe820832edef01e28d6d809ed28e5cfe0bd056ffcaaa28d30a8e6af"} Jan 22 10:45:06 crc kubenswrapper[4752]: I0122 10:45:06.648244 4752 generic.go:334] "Generic (PLEG): container finished" podID="e48968c0-ac21-49af-9161-19bf5e37c9eb" 
containerID="369d75194e907a9b894f8ee65e5481b809e7bfb63d02aac3dc501f1bf7ff3256" exitCode=0 Jan 22 10:45:06 crc kubenswrapper[4752]: I0122 10:45:06.648350 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6vrvk" event={"ID":"e48968c0-ac21-49af-9161-19bf5e37c9eb","Type":"ContainerDied","Data":"369d75194e907a9b894f8ee65e5481b809e7bfb63d02aac3dc501f1bf7ff3256"} Jan 22 10:45:06 crc kubenswrapper[4752]: I0122 10:45:06.657146 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 22 10:45:07 crc kubenswrapper[4752]: I0122 10:45:07.289460 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:45:07 crc kubenswrapper[4752]: I0122 10:45:07.336586 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: connect: connection refused" Jan 22 10:45:07 crc kubenswrapper[4752]: I0122 10:45:07.378263 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:45:08 crc kubenswrapper[4752]: I0122 10:45:08.016677 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.470539 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.661769 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9btpv\" (UniqueName: \"kubernetes.io/projected/e48968c0-ac21-49af-9161-19bf5e37c9eb-kube-api-access-9btpv\") pod \"e48968c0-ac21-49af-9161-19bf5e37c9eb\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.662453 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-combined-ca-bundle\") pod \"e48968c0-ac21-49af-9161-19bf5e37c9eb\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.662535 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-scripts\") pod \"e48968c0-ac21-49af-9161-19bf5e37c9eb\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.662560 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-db-sync-config-data\") pod \"e48968c0-ac21-49af-9161-19bf5e37c9eb\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.662631 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-config-data\") pod \"e48968c0-ac21-49af-9161-19bf5e37c9eb\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.662697 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/e48968c0-ac21-49af-9161-19bf5e37c9eb-etc-machine-id\") pod \"e48968c0-ac21-49af-9161-19bf5e37c9eb\" (UID: \"e48968c0-ac21-49af-9161-19bf5e37c9eb\") " Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.666795 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e48968c0-ac21-49af-9161-19bf5e37c9eb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e48968c0-ac21-49af-9161-19bf5e37c9eb" (UID: "e48968c0-ac21-49af-9161-19bf5e37c9eb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.670993 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-scripts" (OuterVolumeSpecName: "scripts") pod "e48968c0-ac21-49af-9161-19bf5e37c9eb" (UID: "e48968c0-ac21-49af-9161-19bf5e37c9eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.671917 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48968c0-ac21-49af-9161-19bf5e37c9eb-kube-api-access-9btpv" (OuterVolumeSpecName: "kube-api-access-9btpv") pod "e48968c0-ac21-49af-9161-19bf5e37c9eb" (UID: "e48968c0-ac21-49af-9161-19bf5e37c9eb"). InnerVolumeSpecName "kube-api-access-9btpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.683025 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e48968c0-ac21-49af-9161-19bf5e37c9eb" (UID: "e48968c0-ac21-49af-9161-19bf5e37c9eb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.688070 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6vrvk" event={"ID":"e48968c0-ac21-49af-9161-19bf5e37c9eb","Type":"ContainerDied","Data":"49bf21c0dfd7718587989cef2236dca77a0016bd9f90eef1044d8c01f5db6d17"} Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.688325 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bf21c0dfd7718587989cef2236dca77a0016bd9f90eef1044d8c01f5db6d17" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.688507 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6vrvk" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.717305 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e48968c0-ac21-49af-9161-19bf5e37c9eb" (UID: "e48968c0-ac21-49af-9161-19bf5e37c9eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.752290 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-config-data" (OuterVolumeSpecName: "config-data") pod "e48968c0-ac21-49af-9161-19bf5e37c9eb" (UID: "e48968c0-ac21-49af-9161-19bf5e37c9eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.765747 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.765794 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.765806 4752 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.765817 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48968c0-ac21-49af-9161-19bf5e37c9eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.765827 4752 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e48968c0-ac21-49af-9161-19bf5e37c9eb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:09 crc kubenswrapper[4752]: I0122 10:45:09.765838 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9btpv\" (UniqueName: \"kubernetes.io/projected/e48968c0-ac21-49af-9161-19bf5e37c9eb-kube-api-access-9btpv\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.149507 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.277685 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-svc\") pod \"e8778431-6322-4c5f-a77b-5c8d051c10d4\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.277726 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-sb\") pod \"e8778431-6322-4c5f-a77b-5c8d051c10d4\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.277760 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-config\") pod \"e8778431-6322-4c5f-a77b-5c8d051c10d4\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.277777 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-nb\") pod \"e8778431-6322-4c5f-a77b-5c8d051c10d4\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.277811 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-swift-storage-0\") pod \"e8778431-6322-4c5f-a77b-5c8d051c10d4\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.277945 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htrm2\" (UniqueName: \"kubernetes.io/projected/e8778431-6322-4c5f-a77b-5c8d051c10d4-kube-api-access-htrm2\") pod \"e8778431-6322-4c5f-a77b-5c8d051c10d4\" (UID: \"e8778431-6322-4c5f-a77b-5c8d051c10d4\") " Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.301081 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8778431-6322-4c5f-a77b-5c8d051c10d4-kube-api-access-htrm2" (OuterVolumeSpecName: "kube-api-access-htrm2") pod "e8778431-6322-4c5f-a77b-5c8d051c10d4" (UID: "e8778431-6322-4c5f-a77b-5c8d051c10d4"). InnerVolumeSpecName "kube-api-access-htrm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.366629 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8778431-6322-4c5f-a77b-5c8d051c10d4" (UID: "e8778431-6322-4c5f-a77b-5c8d051c10d4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.381136 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htrm2\" (UniqueName: \"kubernetes.io/projected/e8778431-6322-4c5f-a77b-5c8d051c10d4-kube-api-access-htrm2\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.381167 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.416381 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8778431-6322-4c5f-a77b-5c8d051c10d4" (UID: "e8778431-6322-4c5f-a77b-5c8d051c10d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.430326 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8778431-6322-4c5f-a77b-5c8d051c10d4" (UID: "e8778431-6322-4c5f-a77b-5c8d051c10d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.431485 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-config" (OuterVolumeSpecName: "config") pod "e8778431-6322-4c5f-a77b-5c8d051c10d4" (UID: "e8778431-6322-4c5f-a77b-5c8d051c10d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.471799 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e8778431-6322-4c5f-a77b-5c8d051c10d4" (UID: "e8778431-6322-4c5f-a77b-5c8d051c10d4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.483056 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.483089 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.483100 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.483109 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8778431-6322-4c5f-a77b-5c8d051c10d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:10 crc kubenswrapper[4752]: E0122 10:45:10.685158 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="fd370da5-83df-42ba-a822-7cff763d174b" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.714538 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd370da5-83df-42ba-a822-7cff763d174b","Type":"ContainerStarted","Data":"4d303f0ee7f28a29245b9c515d9656433fb564e5975e66e1df82bdddf2dad8f8"} Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.714684 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="sg-core" containerID="cri-o://54be2e363e89413c2d58353d464b750d052099d2757fbd887fd4719dc710a8d2" gracePeriod=30 Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.714729 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.714844 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="proxy-httpd" containerID="cri-o://4d303f0ee7f28a29245b9c515d9656433fb564e5975e66e1df82bdddf2dad8f8" gracePeriod=30 Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.720676 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" event={"ID":"bc924366-228b-413d-b8f5-9f71257add3c","Type":"ContainerStarted","Data":"8f9b90346d6fc2f1f9ef8df038634cc232377ed83bc97ee99e853bf7ea6e4062"} Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.735543 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" event={"ID":"e8778431-6322-4c5f-a77b-5c8d051c10d4","Type":"ContainerDied","Data":"b23920bf7171d7679fa41c43dd057e70fa7a346a4892b5739b1bc3f17cd86daa"} Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.735595 4752 scope.go:117] "RemoveContainer" 
containerID="08cfc20dcbe820832edef01e28d6d809ed28e5cfe0bd056ffcaaa28d30a8e6af" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.735718 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dccb4fc9c-xqbgq" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.757828 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:10 crc kubenswrapper[4752]: E0122 10:45:10.758237 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="init" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758256 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="init" Jan 22 10:45:10 crc kubenswrapper[4752]: E0122 10:45:10.758266 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48968c0-ac21-49af-9161-19bf5e37c9eb" containerName="cinder-db-sync" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758272 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48968c0-ac21-49af-9161-19bf5e37c9eb" containerName="cinder-db-sync" Jan 22 10:45:10 crc kubenswrapper[4752]: E0122 10:45:10.758301 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="dnsmasq-dns" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758307 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="dnsmasq-dns" Jan 22 10:45:10 crc kubenswrapper[4752]: E0122 10:45:10.758314 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b09c1a7-b13b-45f2-908c-9f3459653f0e" containerName="collect-profiles" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758320 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b09c1a7-b13b-45f2-908c-9f3459653f0e" containerName="collect-profiles" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758498 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b09c1a7-b13b-45f2-908c-9f3459653f0e" containerName="collect-profiles" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758515 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" containerName="dnsmasq-dns" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.758532 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48968c0-ac21-49af-9161-19bf5e37c9eb" containerName="cinder-db-sync" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.759545 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.767204 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.767389 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gl654" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.767551 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.767738 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.793957 4752 scope.go:117] "RemoveContainer" containerID="b2e7bad9402454b1afb8873f1fbd5a7aed6e3dc42c5f64513faa1a26f2fde0a7" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.815729 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.849324 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dccb4fc9c-xqbgq"] Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.872395 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dccb4fc9c-xqbgq"] Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.897230 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.897306 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d930b78-d90f-4db7-b354-8fd6c13799cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.897489 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqm4j\" (UniqueName: \"kubernetes.io/projected/6d930b78-d90f-4db7-b354-8fd6c13799cc-kube-api-access-dqm4j\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.897597 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.897939 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.898028 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.923103 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.924360 4752 scope.go:117] "RemoveContainer" containerID="850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16" Jan 22 10:45:10 crc kubenswrapper[4752]: E0122 10:45:10.924802 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(806e176d-686f-4523-822c-f519f6a6076d)\"" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.938677 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85c94b455f-t6lr7"] Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.953022 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.989337 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85c94b455f-t6lr7"] Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.999689 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.999743 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.999807 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.999843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d930b78-d90f-4db7-b354-8fd6c13799cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.999900 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqm4j\" (UniqueName: \"kubernetes.io/projected/6d930b78-d90f-4db7-b354-8fd6c13799cc-kube-api-access-dqm4j\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:10 crc kubenswrapper[4752]: I0122 10:45:10.999923 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.000946 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d930b78-d90f-4db7-b354-8fd6c13799cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.006971 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.008962 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.011766 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.016906 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.024284 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-759c77fb4d-d9xcr" podStartSLOduration=9.450229083 podStartE2EDuration="17.024264212s" podCreationTimestamp="2026-01-22 10:44:54 +0000 UTC" firstStartedPulling="2026-01-22 10:44:56.338130921 +0000 UTC m=+1175.568073829" lastFinishedPulling="2026-01-22 10:45:03.91216604 +0000 UTC m=+1183.142108958" observedRunningTime="2026-01-22 10:45:10.866261648 +0000 UTC m=+1190.096204556" watchObservedRunningTime="2026-01-22 10:45:11.024264212 +0000 UTC m=+1190.254207120" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.037496 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqm4j\" (UniqueName: \"kubernetes.io/projected/6d930b78-d90f-4db7-b354-8fd6c13799cc-kube-api-access-dqm4j\") pod \"cinder-scheduler-0\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.082778 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.090708 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.101292 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbm8\" (UniqueName: \"kubernetes.io/projected/ba33f293-195a-44c7-9b7f-6f57716c4fa8-kube-api-access-9hbm8\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.101347 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.101402 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-svc\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.101451 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-config\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.101475 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.101505 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-swift-storage-0\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.138280 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.140187 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8778431-6322-4c5f-a77b-5c8d051c10d4" path="/var/lib/kubelet/pods/e8778431-6322-4c5f-a77b-5c8d051c10d4/volumes" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.140678 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.141139 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.202903 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.202951 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-swift-storage-0\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.202986 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203021 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data-custom\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203070 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbm8\" (UniqueName: \"kubernetes.io/projected/ba33f293-195a-44c7-9b7f-6f57716c4fa8-kube-api-access-9hbm8\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203103 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203123 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-scripts\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203169 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-svc\") pod 
\"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203211 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d67f6-213c-4e6b-be32-207e2dd95409-logs\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203226 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-config\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203252 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203275 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d67f6-213c-4e6b-be32-207e2dd95409-etc-machine-id\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.203296 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltv2\" (UniqueName: \"kubernetes.io/projected/425d67f6-213c-4e6b-be32-207e2dd95409-kube-api-access-7ltv2\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.204789 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-config\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.208587 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.213147 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-swift-storage-0\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.213961 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc 
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.214186 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-svc\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.231558 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbm8\" (UniqueName: \"kubernetes.io/projected/ba33f293-195a-44c7-9b7f-6f57716c4fa8-kube-api-access-9hbm8\") pod \"dnsmasq-dns-85c94b455f-t6lr7\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " pod="openstack/dnsmasq-dns-85c94b455f-t6lr7"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305395 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-scripts\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305568 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d67f6-213c-4e6b-be32-207e2dd95409-logs\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305622 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d67f6-213c-4e6b-be32-207e2dd95409-etc-machine-id\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305653 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ltv2\" (UniqueName: \"kubernetes.io/projected/425d67f6-213c-4e6b-be32-207e2dd95409-kube-api-access-7ltv2\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305716 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305765 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.305801 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data-custom\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.307743 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d67f6-213c-4e6b-be32-207e2dd95409-etc-machine-id\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0"
pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.309510 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d67f6-213c-4e6b-be32-207e2dd95409-logs\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.313209 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.316119 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-scripts\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.317338 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.321666 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.327538 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ltv2\" (UniqueName: \"kubernetes.io/projected/425d67f6-213c-4e6b-be32-207e2dd95409-kube-api-access-7ltv2\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.330883 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data-custom\") pod \"cinder-api-0\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.481483 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.804492 4752 generic.go:334] "Generic (PLEG): container finished" podID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerID="db9e7fc0f27aa6579f531cae42db36fd5a919ac9df8e49f6debefcd6af207b5d" exitCode=137 Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.804710 4752 generic.go:334] "Generic (PLEG): container finished" podID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerID="f680d614c0afd3b21a8db8e64421b7f550e74cf26cf9122c1c2dcfe05b10ed80" exitCode=137 Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.804782 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6864464dd5-tktm8" event={"ID":"28a90e4a-ca62-4bd6-bfee-29cfda8b7478","Type":"ContainerDied","Data":"db9e7fc0f27aa6579f531cae42db36fd5a919ac9df8e49f6debefcd6af207b5d"} Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.804807 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6864464dd5-tktm8" event={"ID":"28a90e4a-ca62-4bd6-bfee-29cfda8b7478","Type":"ContainerDied","Data":"f680d614c0afd3b21a8db8e64421b7f550e74cf26cf9122c1c2dcfe05b10ed80"} Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.817879 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677cfc4d77-thl6g" event={"ID":"2e64c278-d203-43c6-9899-8af0a062c3da","Type":"ContainerStarted","Data":"f8ed3df8330c6f9ff66fd23d650aaf6da9f42b4ca350009d22706f64ff89a54b"} Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.826973 4752 generic.go:334] "Generic (PLEG): container finished" podID="fd370da5-83df-42ba-a822-7cff763d174b" containerID="54be2e363e89413c2d58353d464b750d052099d2757fbd887fd4719dc710a8d2" exitCode=2 Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.827047 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd370da5-83df-42ba-a822-7cff763d174b","Type":"ContainerDied","Data":"54be2e363e89413c2d58353d464b750d052099d2757fbd887fd4719dc710a8d2"} Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.843263 4752 generic.go:334] "Generic (PLEG): container finished" podID="3facab56-48f5-4f06-b879-86a9fb933537" containerID="e934eb444bf8aa33dc12592d2ae0702f33194a403b7548f34e096e4c9f84873f" exitCode=137 Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.843300 4752 generic.go:334] "Generic (PLEG): container finished" podID="3facab56-48f5-4f06-b879-86a9fb933537" containerID="d46e798bfc10137eb09aa25b2567d922c6b30720aa3745d474eff6798086f491" exitCode=137 Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.847437 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57957469d5-fx7bl" event={"ID":"3facab56-48f5-4f06-b879-86a9fb933537","Type":"ContainerDied","Data":"e934eb444bf8aa33dc12592d2ae0702f33194a403b7548f34e096e4c9f84873f"} Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.847492 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57957469d5-fx7bl" event={"ID":"3facab56-48f5-4f06-b879-86a9fb933537","Type":"ContainerDied","Data":"d46e798bfc10137eb09aa25b2567d922c6b30720aa3745d474eff6798086f491"} Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.859075 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-677cfc4d77-thl6g" podStartSLOduration=9.972827543 podStartE2EDuration="17.859054393s" podCreationTimestamp="2026-01-22 10:44:54 +0000 UTC" firstStartedPulling="2026-01-22 10:44:56.02279596 +0000 UTC 
m=+1175.252738868" lastFinishedPulling="2026-01-22 10:45:03.90902281 +0000 UTC m=+1183.138965718" observedRunningTime="2026-01-22 10:45:11.839404283 +0000 UTC m=+1191.069347191" watchObservedRunningTime="2026-01-22 10:45:11.859054393 +0000 UTC m=+1191.088997301" Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.941070 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.955869 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:45:11 crc kubenswrapper[4752]: I0122 10:45:11.984664 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.148554 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3facab56-48f5-4f06-b879-86a9fb933537-horizon-secret-key\") pod \"3facab56-48f5-4f06-b879-86a9fb933537\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.148631 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facab56-48f5-4f06-b879-86a9fb933537-logs\") pod \"3facab56-48f5-4f06-b879-86a9fb933537\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.148696 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th7zq\" (UniqueName: \"kubernetes.io/projected/3facab56-48f5-4f06-b879-86a9fb933537-kube-api-access-th7zq\") pod \"3facab56-48f5-4f06-b879-86a9fb933537\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.148786 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-scripts\") pod \"3facab56-48f5-4f06-b879-86a9fb933537\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.149022 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-config-data\") pod \"3facab56-48f5-4f06-b879-86a9fb933537\" (UID: \"3facab56-48f5-4f06-b879-86a9fb933537\") " Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.150224 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3facab56-48f5-4f06-b879-86a9fb933537-logs" (OuterVolumeSpecName: "logs") pod "3facab56-48f5-4f06-b879-86a9fb933537" (UID: "3facab56-48f5-4f06-b879-86a9fb933537"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.155294 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3facab56-48f5-4f06-b879-86a9fb933537-kube-api-access-th7zq" (OuterVolumeSpecName: "kube-api-access-th7zq") pod "3facab56-48f5-4f06-b879-86a9fb933537" (UID: "3facab56-48f5-4f06-b879-86a9fb933537"). InnerVolumeSpecName "kube-api-access-th7zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.160899 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3facab56-48f5-4f06-b879-86a9fb933537-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3facab56-48f5-4f06-b879-86a9fb933537" (UID: "3facab56-48f5-4f06-b879-86a9fb933537"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.195965 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.223683 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-scripts" (OuterVolumeSpecName: "scripts") pod "3facab56-48f5-4f06-b879-86a9fb933537" (UID: "3facab56-48f5-4f06-b879-86a9fb933537"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.234627 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-config-data" (OuterVolumeSpecName: "config-data") pod "3facab56-48f5-4f06-b879-86a9fb933537" (UID: "3facab56-48f5-4f06-b879-86a9fb933537"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.264165 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.264195 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3facab56-48f5-4f06-b879-86a9fb933537-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.264206 4752 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3facab56-48f5-4f06-b879-86a9fb933537-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.264215 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facab56-48f5-4f06-b879-86a9fb933537-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.264224 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th7zq\" (UniqueName: \"kubernetes.io/projected/3facab56-48f5-4f06-b879-86a9fb933537-kube-api-access-th7zq\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.377304 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-scripts\") pod \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.377594 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-config-data\") pod \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") " Jan 
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.377717 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-horizon-secret-key\") pod \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") "
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.377783 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdbvx\" (UniqueName: \"kubernetes.io/projected/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-kube-api-access-cdbvx\") pod \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\" (UID: \"28a90e4a-ca62-4bd6-bfee-29cfda8b7478\") "
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.378167 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-logs" (OuterVolumeSpecName: "logs") pod "28a90e4a-ca62-4bd6-bfee-29cfda8b7478" (UID: "28a90e4a-ca62-4bd6-bfee-29cfda8b7478"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:45:12 crc kubenswrapper[4752]: W0122 10:45:12.396998 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba33f293_195a_44c7_9b7f_6f57716c4fa8.slice/crio-da90a9cf82f5e631582e3611a39d7325d87e90862f009c2c825efd6fd32d2374 WatchSource:0}: Error finding container da90a9cf82f5e631582e3611a39d7325d87e90862f009c2c825efd6fd32d2374: Status 404 returned error can't find the container with id da90a9cf82f5e631582e3611a39d7325d87e90862f009c2c825efd6fd32d2374
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.397739 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "28a90e4a-ca62-4bd6-bfee-29cfda8b7478" (UID: "28a90e4a-ca62-4bd6-bfee-29cfda8b7478"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.397951 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-kube-api-access-cdbvx" (OuterVolumeSpecName: "kube-api-access-cdbvx") pod "28a90e4a-ca62-4bd6-bfee-29cfda8b7478" (UID: "28a90e4a-ca62-4bd6-bfee-29cfda8b7478"). InnerVolumeSpecName "kube-api-access-cdbvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.413758 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85c94b455f-t6lr7"]
Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.448420 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-scripts" (OuterVolumeSpecName: "scripts") pod "28a90e4a-ca62-4bd6-bfee-29cfda8b7478" (UID: "28a90e4a-ca62-4bd6-bfee-29cfda8b7478"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.481388 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.481420 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.481430 4752 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.481455 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdbvx\" (UniqueName: \"kubernetes.io/projected/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-kube-api-access-cdbvx\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.516629 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-config-data" (OuterVolumeSpecName: "config-data") pod "28a90e4a-ca62-4bd6-bfee-29cfda8b7478" (UID: "28a90e4a-ca62-4bd6-bfee-29cfda8b7478"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.524908 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.584093 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a90e4a-ca62-4bd6-bfee-29cfda8b7478-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.762268 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-765775556-hgf76" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.822723 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5ccb469cdb-gdc7d"] Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.823238 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5ccb469cdb-gdc7d" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api-log" containerID="cri-o://851e342a903807eb9a64f951507bbc2f5895b4d331350d6874a0cb759d075578" gracePeriod=30 Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.823636 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5ccb469cdb-gdc7d" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api" containerID="cri-o://e703b6f9a9799ba9383ee40ee47c862b1adc85d96047aba84bd22897846565f3" gracePeriod=30 Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.837450 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5ccb469cdb-gdc7d" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": EOF" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.865339 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6864464dd5-tktm8" 
event={"ID":"28a90e4a-ca62-4bd6-bfee-29cfda8b7478","Type":"ContainerDied","Data":"bb5623936602559956d65127edcfc8ac655b5ab02790441b30485aa9995184f3"} Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.865368 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6864464dd5-tktm8" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.865395 4752 scope.go:117] "RemoveContainer" containerID="db9e7fc0f27aa6579f531cae42db36fd5a919ac9df8e49f6debefcd6af207b5d" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.893949 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57957469d5-fx7bl" event={"ID":"3facab56-48f5-4f06-b879-86a9fb933537","Type":"ContainerDied","Data":"a999a074d6997a7d85ee6440101bb5c7f0e23d0da3d1b5ab87776c41e034ba80"} Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.894040 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57957469d5-fx7bl" Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.902976 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" event={"ID":"ba33f293-195a-44c7-9b7f-6f57716c4fa8","Type":"ContainerStarted","Data":"da90a9cf82f5e631582e3611a39d7325d87e90862f009c2c825efd6fd32d2374"} Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.904293 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d67f6-213c-4e6b-be32-207e2dd95409","Type":"ContainerStarted","Data":"50cdefd18206ec2d0628cdb2b40c470840d8c0bb837a12410ac1595d05a37017"} Jan 22 10:45:12 crc kubenswrapper[4752]: I0122 10:45:12.905950 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d930b78-d90f-4db7-b354-8fd6c13799cc","Type":"ContainerStarted","Data":"c3ca8d330cd17439ecad8b28f42724f20cba4aaf3c1eff2f037058e7396f0351"} Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.274819 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57957469d5-fx7bl"] Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.275103 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57957469d5-fx7bl"] Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.282916 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.310490 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6864464dd5-tktm8"] Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.335556 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6864464dd5-tktm8"] Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.470319 4752 scope.go:117] "RemoveContainer" containerID="f680d614c0afd3b21a8db8e64421b7f550e74cf26cf9122c1c2dcfe05b10ed80" Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.520638 4752 scope.go:117] "RemoveContainer" containerID="e934eb444bf8aa33dc12592d2ae0702f33194a403b7548f34e096e4c9f84873f" Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.922354 4752 generic.go:334] "Generic (PLEG): container finished" podID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerID="3c54cda7d53f54fddf651d1a3dc0df08148d9c7a45677d43676fbf24127cafe7" exitCode=0 Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.922444 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" 
event={"ID":"ba33f293-195a-44c7-9b7f-6f57716c4fa8","Type":"ContainerDied","Data":"3c54cda7d53f54fddf651d1a3dc0df08148d9c7a45677d43676fbf24127cafe7"} Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.928905 4752 generic.go:334] "Generic (PLEG): container finished" podID="dad95ced-e983-4d80-8901-8cd6537337cf" containerID="851e342a903807eb9a64f951507bbc2f5895b4d331350d6874a0cb759d075578" exitCode=143 Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.928963 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccb469cdb-gdc7d" event={"ID":"dad95ced-e983-4d80-8901-8cd6537337cf","Type":"ContainerDied","Data":"851e342a903807eb9a64f951507bbc2f5895b4d331350d6874a0cb759d075578"} Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.937697 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d930b78-d90f-4db7-b354-8fd6c13799cc","Type":"ContainerStarted","Data":"ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462"} Jan 22 10:45:13 crc kubenswrapper[4752]: I0122 10:45:13.975813 4752 scope.go:117] "RemoveContainer" containerID="d46e798bfc10137eb09aa25b2567d922c6b30720aa3745d474eff6798086f491" Jan 22 10:45:14 crc kubenswrapper[4752]: I0122 10:45:14.767024 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:45:14 crc kubenswrapper[4752]: I0122 10:45:14.960670 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" event={"ID":"ba33f293-195a-44c7-9b7f-6f57716c4fa8","Type":"ContainerStarted","Data":"0e70974e53dad844fb03bc4851b13ebae7c5106575d98127898a815f2adf80d8"} Jan 22 10:45:14 crc kubenswrapper[4752]: I0122 10:45:14.961573 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:14 crc kubenswrapper[4752]: I0122 10:45:14.963900 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d67f6-213c-4e6b-be32-207e2dd95409","Type":"ContainerStarted","Data":"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6"} Jan 22 10:45:14 crc kubenswrapper[4752]: I0122 10:45:14.967437 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d930b78-d90f-4db7-b354-8fd6c13799cc","Type":"ContainerStarted","Data":"560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d"} Jan 22 10:45:14 crc kubenswrapper[4752]: I0122 10:45:14.984406 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" podStartSLOduration=4.98438387 podStartE2EDuration="4.98438387s" podCreationTimestamp="2026-01-22 10:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:14.982561403 +0000 UTC m=+1194.212504311" watchObservedRunningTime="2026-01-22 10:45:14.98438387 +0000 UTC m=+1194.214326798" Jan 22 10:45:15 crc kubenswrapper[4752]: I0122 10:45:15.005626 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.143365181 podStartE2EDuration="5.00560904s" podCreationTimestamp="2026-01-22 10:45:10 +0000 UTC" firstStartedPulling="2026-01-22 10:45:11.955556141 +0000 UTC m=+1191.185499049" lastFinishedPulling="2026-01-22 10:45:12.8178 +0000 UTC m=+1192.047742908" observedRunningTime="2026-01-22 10:45:15.002969003 +0000 UTC 
m=+1194.232911911" watchObservedRunningTime="2026-01-22 10:45:15.00560904 +0000 UTC m=+1194.235551948" Jan 22 10:45:15 crc kubenswrapper[4752]: I0122 10:45:15.111798 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" path="/var/lib/kubelet/pods/28a90e4a-ca62-4bd6-bfee-29cfda8b7478/volumes" Jan 22 10:45:15 crc kubenswrapper[4752]: I0122 10:45:15.112497 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3facab56-48f5-4f06-b879-86a9fb933537" path="/var/lib/kubelet/pods/3facab56-48f5-4f06-b879-86a9fb933537/volumes" Jan 22 10:45:15 crc kubenswrapper[4752]: I0122 10:45:15.824644 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5ccb469cdb-gdc7d" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": dial tcp 10.217.0.183:9311: connect: connection refused" Jan 22 10:45:15 crc kubenswrapper[4752]: I0122 10:45:15.824644 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5ccb469cdb-gdc7d" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": dial tcp 10.217.0.183:9311: connect: connection refused" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:15.998939 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d67f6-213c-4e6b-be32-207e2dd95409","Type":"ContainerStarted","Data":"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc"} Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:15.999089 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api-log" containerID="cri-o://0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6" gracePeriod=30 Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:15.999336 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:15.999579 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api" containerID="cri-o://c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc" gracePeriod=30 Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.002140 4752 generic.go:334] "Generic (PLEG): container finished" podID="dad95ced-e983-4d80-8901-8cd6537337cf" containerID="e703b6f9a9799ba9383ee40ee47c862b1adc85d96047aba84bd22897846565f3" exitCode=0 Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.002266 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccb469cdb-gdc7d" event={"ID":"dad95ced-e983-4d80-8901-8cd6537337cf","Type":"ContainerDied","Data":"e703b6f9a9799ba9383ee40ee47c862b1adc85d96047aba84bd22897846565f3"} Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.039387 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.039362938 podStartE2EDuration="6.039362938s" podCreationTimestamp="2026-01-22 10:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:16.027718971 +0000 UTC m=+1195.257661879" 
watchObservedRunningTime="2026-01-22 10:45:16.039362938 +0000 UTC m=+1195.269305846" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.095148 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.313269 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.421962 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data-custom\") pod \"dad95ced-e983-4d80-8901-8cd6537337cf\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.422118 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data\") pod \"dad95ced-e983-4d80-8901-8cd6537337cf\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.422221 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-combined-ca-bundle\") pod \"dad95ced-e983-4d80-8901-8cd6537337cf\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.422293 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dad95ced-e983-4d80-8901-8cd6537337cf-logs\") pod \"dad95ced-e983-4d80-8901-8cd6537337cf\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.422359 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzcft\" (UniqueName: \"kubernetes.io/projected/dad95ced-e983-4d80-8901-8cd6537337cf-kube-api-access-fzcft\") pod \"dad95ced-e983-4d80-8901-8cd6537337cf\" (UID: \"dad95ced-e983-4d80-8901-8cd6537337cf\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.424755 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad95ced-e983-4d80-8901-8cd6537337cf-logs" (OuterVolumeSpecName: "logs") pod "dad95ced-e983-4d80-8901-8cd6537337cf" (UID: "dad95ced-e983-4d80-8901-8cd6537337cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.434287 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad95ced-e983-4d80-8901-8cd6537337cf-kube-api-access-fzcft" (OuterVolumeSpecName: "kube-api-access-fzcft") pod "dad95ced-e983-4d80-8901-8cd6537337cf" (UID: "dad95ced-e983-4d80-8901-8cd6537337cf"). InnerVolumeSpecName "kube-api-access-fzcft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.434686 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dad95ced-e983-4d80-8901-8cd6537337cf" (UID: "dad95ced-e983-4d80-8901-8cd6537337cf"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.463996 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dad95ced-e983-4d80-8901-8cd6537337cf" (UID: "dad95ced-e983-4d80-8901-8cd6537337cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.492237 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data" (OuterVolumeSpecName: "config-data") pod "dad95ced-e983-4d80-8901-8cd6537337cf" (UID: "dad95ced-e983-4d80-8901-8cd6537337cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.538264 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzcft\" (UniqueName: \"kubernetes.io/projected/dad95ced-e983-4d80-8901-8cd6537337cf-kube-api-access-fzcft\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.538302 4752 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.538316 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.538330 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad95ced-e983-4d80-8901-8cd6537337cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.538343 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dad95ced-e983-4d80-8901-8cd6537337cf-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.675802 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.761661 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-56b8d5fdb8-7gp4n" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.829349 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c77556c9d-7cqmw"] Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.829593 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c77556c9d-7cqmw" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon-log" containerID="cri-o://ee95aea4555325bf07e8ed3255ffe4d6d9d9f485d9c00e2707350ed5c6af4b29" gracePeriod=30 Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.829697 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c77556c9d-7cqmw" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" containerID="cri-o://9aadbd246544b4c33e87afdab4b26ff2bfd1c7d8c83899d5519375d8b2cfa6d6" gracePeriod=30 Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.846748 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d67f6-213c-4e6b-be32-207e2dd95409-logs\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.846819 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data-custom\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.854313 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/425d67f6-213c-4e6b-be32-207e2dd95409-logs" (OuterVolumeSpecName: "logs") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.854395 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-combined-ca-bundle\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.854494 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.854559 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d67f6-213c-4e6b-be32-207e2dd95409-etc-machine-id\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.854604 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-scripts\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.854673 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ltv2\" (UniqueName: \"kubernetes.io/projected/425d67f6-213c-4e6b-be32-207e2dd95409-kube-api-access-7ltv2\") pod \"425d67f6-213c-4e6b-be32-207e2dd95409\" (UID: \"425d67f6-213c-4e6b-be32-207e2dd95409\") " Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.855490 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/425d67f6-213c-4e6b-be32-207e2dd95409-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.856008 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/425d67f6-213c-4e6b-be32-207e2dd95409-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.857996 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.862269 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-scripts" (OuterVolumeSpecName: "scripts") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.865282 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/425d67f6-213c-4e6b-be32-207e2dd95409-kube-api-access-7ltv2" (OuterVolumeSpecName: "kube-api-access-7ltv2") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "kube-api-access-7ltv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.895546 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.952072 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data" (OuterVolumeSpecName: "config-data") pod "425d67f6-213c-4e6b-be32-207e2dd95409" (UID: "425d67f6-213c-4e6b-be32-207e2dd95409"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.960680 4752 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.960720 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.960731 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.960740 4752 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/425d67f6-213c-4e6b-be32-207e2dd95409-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.960749 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d67f6-213c-4e6b-be32-207e2dd95409-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:16 crc kubenswrapper[4752]: I0122 10:45:16.960758 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ltv2\" (UniqueName: \"kubernetes.io/projected/425d67f6-213c-4e6b-be32-207e2dd95409-kube-api-access-7ltv2\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.014208 4752 generic.go:334] "Generic (PLEG): container finished" podID="425d67f6-213c-4e6b-be32-207e2dd95409" containerID="c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc" exitCode=0 Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.014246 4752 generic.go:334] "Generic (PLEG): container finished" podID="425d67f6-213c-4e6b-be32-207e2dd95409" containerID="0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6" exitCode=143 Jan 22 10:45:17 crc kubenswrapper[4752]: 
I0122 10:45:17.014290 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d67f6-213c-4e6b-be32-207e2dd95409","Type":"ContainerDied","Data":"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc"} Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.014318 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d67f6-213c-4e6b-be32-207e2dd95409","Type":"ContainerDied","Data":"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6"} Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.014332 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"425d67f6-213c-4e6b-be32-207e2dd95409","Type":"ContainerDied","Data":"50cdefd18206ec2d0628cdb2b40c470840d8c0bb837a12410ac1595d05a37017"} Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.014350 4752 scope.go:117] "RemoveContainer" containerID="c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.014404 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.027149 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ccb469cdb-gdc7d" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.032096 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ccb469cdb-gdc7d" event={"ID":"dad95ced-e983-4d80-8901-8cd6537337cf","Type":"ContainerDied","Data":"1c6df7deb156273057f4b8aec5fcd1f43c501ac54e37d38ecfcc8a67409022d6"} Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.041722 4752 scope.go:117] "RemoveContainer" containerID="0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.064961 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.083919 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.091650 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5ccb469cdb-gdc7d"] Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.094826 4752 scope.go:117] "RemoveContainer" containerID="c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.095244 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc\": container with ID starting with c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc not found: ID does not exist" containerID="c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.095279 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc"} err="failed to get container status \"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc\": rpc error: code = NotFound desc = could not find container \"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc\": container with ID starting with c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc not 
found: ID does not exist" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.095300 4752 scope.go:117] "RemoveContainer" containerID="0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.095528 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6\": container with ID starting with 0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6 not found: ID does not exist" containerID="0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.095569 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6"} err="failed to get container status \"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6\": rpc error: code = NotFound desc = could not find container \"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6\": container with ID starting with 0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6 not found: ID does not exist" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.095582 4752 scope.go:117] "RemoveContainer" containerID="c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.095822 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc"} err="failed to get container status \"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc\": rpc error: code = NotFound desc = could not find container \"c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc\": container with ID starting with c2bd1913b64969529ed5ef25e08e180973c8b91c65dd8aacf44d02fff0e8f3cc not found: ID does not exist" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.095840 4752 scope.go:117] "RemoveContainer" containerID="0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.096169 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6"} err="failed to get container status \"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6\": rpc error: code = NotFound desc = could not find container \"0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6\": container with ID starting with 0c398fd3ff9d620a766027ff54aa8be93d6a2b256981c1f362be966f531e5bf6 not found: ID does not exist" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.096220 4752 scope.go:117] "RemoveContainer" containerID="e703b6f9a9799ba9383ee40ee47c862b1adc85d96047aba84bd22897846565f3" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.125936 4752 scope.go:117] "RemoveContainer" containerID="851e342a903807eb9a64f951507bbc2f5895b4d331350d6874a0cb759d075578" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.127424 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" path="/var/lib/kubelet/pods/425d67f6-213c-4e6b-be32-207e2dd95409/volumes" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128040 4752 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128326 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128343 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon-log" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128352 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128359 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128375 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128381 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon-log" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128398 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128406 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128418 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128424 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128434 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128440 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api-log" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128457 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128463 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api-log" Jan 22 10:45:17 crc kubenswrapper[4752]: E0122 10:45:17.128480 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128486 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128650 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128663 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon-log" Jan 22 
10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128671 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128679 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128689 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3facab56-48f5-4f06-b879-86a9fb933537" containerName="horizon-log" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128698 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a90e4a-ca62-4bd6-bfee-29cfda8b7478" containerName="horizon" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128711 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" containerName="barbican-api" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.128720 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="425d67f6-213c-4e6b-be32-207e2dd95409" containerName="cinder-api" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.129700 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5ccb469cdb-gdc7d"] Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.129808 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.136108 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.143194 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.143360 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.143376 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267113 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq2tb\" (UniqueName: \"kubernetes.io/projected/2e82570d-27c0-44b3-be04-21c1926ff784-kube-api-access-cq2tb\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267168 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e82570d-27c0-44b3-be04-21c1926ff784-logs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267204 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267232 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267322 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-config-data\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267386 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e82570d-27c0-44b3-be04-21c1926ff784-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267434 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267456 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.267502 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-scripts\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369140 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq2tb\" (UniqueName: \"kubernetes.io/projected/2e82570d-27c0-44b3-be04-21c1926ff784-kube-api-access-cq2tb\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369191 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e82570d-27c0-44b3-be04-21c1926ff784-logs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369221 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369248 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369285 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-config-data\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369329 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e82570d-27c0-44b3-be04-21c1926ff784-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369366 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369387 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.369418 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-scripts\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.370076 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e82570d-27c0-44b3-be04-21c1926ff784-logs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.370141 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e82570d-27c0-44b3-be04-21c1926ff784-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.373433 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.373739 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-config-data\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.373982 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.374111 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-scripts\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.375618 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-config-data-custom\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.375828 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e82570d-27c0-44b3-be04-21c1926ff784-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.391041 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq2tb\" (UniqueName: \"kubernetes.io/projected/2e82570d-27c0-44b3-be04-21c1926ff784-kube-api-access-cq2tb\") pod \"cinder-api-0\" (UID: \"2e82570d-27c0-44b3-be04-21c1926ff784\") " pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.448027 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 10:45:17 crc kubenswrapper[4752]: I0122 10:45:17.912549 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 10:45:17 crc kubenswrapper[4752]: W0122 10:45:17.915806 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e82570d_27c0_44b3_be04_21c1926ff784.slice/crio-61e051ac6f0d9aafbbfeb78492348176753818fa9a36eea3d87bfeb0948b862c WatchSource:0}: Error finding container 61e051ac6f0d9aafbbfeb78492348176753818fa9a36eea3d87bfeb0948b862c: Status 404 returned error can't find the container with id 61e051ac6f0d9aafbbfeb78492348176753818fa9a36eea3d87bfeb0948b862c Jan 22 10:45:18 crc kubenswrapper[4752]: I0122 10:45:18.036018 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e82570d-27c0-44b3-be04-21c1926ff784","Type":"ContainerStarted","Data":"61e051ac6f0d9aafbbfeb78492348176753818fa9a36eea3d87bfeb0948b862c"} Jan 22 10:45:18 crc kubenswrapper[4752]: I0122 10:45:18.041061 4752 generic.go:334] "Generic (PLEG): container finished" podID="5ba449ad-098c-4918-9403-750b0c29ee93" containerID="9aadbd246544b4c33e87afdab4b26ff2bfd1c7d8c83899d5519375d8b2cfa6d6" exitCode=0 Jan 22 10:45:18 crc kubenswrapper[4752]: I0122 10:45:18.041106 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c77556c9d-7cqmw" event={"ID":"5ba449ad-098c-4918-9403-750b0c29ee93","Type":"ContainerDied","Data":"9aadbd246544b4c33e87afdab4b26ff2bfd1c7d8c83899d5519375d8b2cfa6d6"} Jan 22 10:45:19 crc kubenswrapper[4752]: I0122 10:45:19.058191 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e82570d-27c0-44b3-be04-21c1926ff784","Type":"ContainerStarted","Data":"8a82cd4cb9602fcd3127f2dc72ad79c8d963dbd0396b50d9824b2b6d650dea92"} Jan 22 10:45:19 crc kubenswrapper[4752]: I0122 10:45:19.112725 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad95ced-e983-4d80-8901-8cd6537337cf" path="/var/lib/kubelet/pods/dad95ced-e983-4d80-8901-8cd6537337cf/volumes" Jan 22 
10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.069662 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2e82570d-27c0-44b3-be04-21c1926ff784","Type":"ContainerStarted","Data":"a55baed6976f31d66a4a99b8ce6093db16200656b774b1fc37d7e42530ef38cf"} Jan 22 10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.070191 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.092531 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.092507544 podStartE2EDuration="3.092507544s" podCreationTimestamp="2026-01-22 10:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:20.08565576 +0000 UTC m=+1199.315598678" watchObservedRunningTime="2026-01-22 10:45:20.092507544 +0000 UTC m=+1199.322450452" Jan 22 10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.163475 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c77556c9d-7cqmw" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Jan 22 10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.919719 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.920471 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:20 crc kubenswrapper[4752]: I0122 10:45:20.921499 4752 scope.go:117] "RemoveContainer" containerID="850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16" Jan 22 10:45:21 crc kubenswrapper[4752]: I0122 10:45:21.273168 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 10:45:21 crc kubenswrapper[4752]: I0122 10:45:21.315002 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:45:21 crc kubenswrapper[4752]: I0122 10:45:21.382155 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:21 crc kubenswrapper[4752]: I0122 10:45:21.419626 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f5db4cc5-c4kxs"] Jan 22 10:45:21 crc kubenswrapper[4752]: I0122 10:45:21.420552 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerName="dnsmasq-dns" containerID="cri-o://ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7" gracePeriod=10 Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.030590 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.094563 4752 generic.go:334] "Generic (PLEG): container finished" podID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerID="ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7" exitCode=0 Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.094653 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" event={"ID":"ce2ad46c-0d8c-4c7f-8025-89493e874a85","Type":"ContainerDied","Data":"ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7"} Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.094738 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" event={"ID":"ce2ad46c-0d8c-4c7f-8025-89493e874a85","Type":"ContainerDied","Data":"22d3d396fb8823cb94dbb8353b92a55f705d74ac6b7b673839a6d674bc180894"} Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.094762 4752 scope.go:117] "RemoveContainer" containerID="ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.094971 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f5db4cc5-c4kxs" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.108907 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="cinder-scheduler" containerID="cri-o://ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462" gracePeriod=30 Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.109487 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerStarted","Data":"a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757"} Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.111486 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="probe" containerID="cri-o://560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d" gracePeriod=30 Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.148151 4752 scope.go:117] "RemoveContainer" containerID="44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.188949 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-nb\") pod \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.189158 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-swift-storage-0\") pod \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.189520 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-config\") pod \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " 
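Annotation: the teardown sequences above follow a fixed pattern. "Killing container with a grace period" records a SIGTERM delivered through CRI-O, with the logged gracePeriod bounding how long the process gets before a SIGKILL. The matching "container finished" events then report how the process died: exitCode=143 is the usual 128+15 (SIGTERM) convention, seen for barbican-api-log and cinder-api-log, while exitCode=0 (barbican-api, cinder-api, dnsmasq-dns) is a clean shutdown inside the grace window. A minimal decoding sketch, standard library only; the helper and its printed samples are an annotation aid, not kubelet code:

```go
package main

import "fmt"

// decodeExitCode reads a container exit code the way the entries above do:
// 0 is a clean shutdown within the grace period; codes above 128 mean the
// process died on a signal (code - 128), per the 128+N convention.
func decodeExitCode(code int) string {
	switch {
	case code == 0:
		return "clean exit inside the grace period"
	case code > 128:
		names := map[int]string{9: "SIGKILL", 15: "SIGTERM"}
		sig := code - 128
		if n, ok := names[sig]; ok {
			return fmt.Sprintf("killed by %s (128+%d)", n, sig)
		}
		return fmt.Sprintf("killed by signal %d (128+%d)", sig, sig)
	default:
		return fmt.Sprintf("error exit %d", code)
	}
}

func main() {
	fmt.Println("barbican-api-log:", decodeExitCode(143)) // died on the SIGTERM itself
	fmt.Println("barbican-api:   ", decodeExitCode(0))    // shut down cleanly when signalled
}
```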
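Annotation: the two readiness failures for barbican-api-5ccb469cdb-gdc7d also trace that shutdown. At 10:45:12 the probe gets "EOF" (the server accepted the connection but closed it mid-request while terminating); by 10:45:15 it gets "connect: connection refused" (nothing is listening on 10.217.0.183:9311 any more). A sketch of a probe-style check that distinguishes the two failure shapes; the URL is copied from the log, and the classification is illustrative rather than the kubelet prober's own logic:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"syscall"
	"time"
)

// probe issues an HTTP GET like the readiness entries above and labels the
// two failure shapes they show. errors.Is unwraps the *url.Error chain.
func probe(url string) string {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		switch {
		case errors.Is(err, io.EOF):
			return "EOF: connection accepted, then dropped mid-request (server terminating)"
		case errors.Is(err, syscall.ECONNREFUSED):
			return "connection refused: listener already gone"
		default:
			return err.Error()
		}
	}
	defer resp.Body.Close()
	return fmt.Sprintf("HTTP %d", resp.StatusCode)
}

func main() {
	fmt.Println(probe("http://10.217.0.183:9311/healthcheck"))
}
```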
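Annotation: the pod_startup_latency_tracker entries carry their own arithmetic. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); zero-valued pull timestamps ("0001-01-01 ...") mean no pull happened, so the two durations coincide, as for dnsmasq-dns-85c94b455f-t6lr7 (4.98s) and cinder-api-0. For cinder-scheduler-0 the pull window is about 0.86s, giving an SLO duration of 4.143s against an E2E of 5.006s. A sketch reproducing those numbers from the logged timestamps (the layout is Go's default time.Time string form); this reconstructs the arithmetic the entries imply, not the tracker's implementation:

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the tracker timestamps above; the trailing .999999999
// makes fractional seconds optional when parsing.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the cinder-scheduler-0 entry above.
	created := mustParse("2026-01-22 10:45:10 +0000 UTC")
	running := mustParse("2026-01-22 10:45:15.00560904 +0000 UTC")
	pullStart := mustParse("2026-01-22 10:45:11.955556141 +0000 UTC")
	pullEnd := mustParse("2026-01-22 10:45:12.8178 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration: 5.00560904s
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 4.143365181s
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```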
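Annotation: the paired "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries for the cinder-api and dnsmasq containers read as benign races, not failures: RemoveContainer ran against containers CRI-O had already removed, and the kubelet logs the NotFound at info level and moves on, treating the deletion as already done. A sketch of that idempotent-delete reading; the sentinel error and helper are hypothetical stand-ins for the gRPC NotFound status check, not kubelet source:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI gRPC NotFound status seen in the
// entries above; hypothetical sentinel for illustration.
var errNotFound = errors.New("container not found")

// removeContainer treats "already gone" as success: when the delete races
// with the runtime's own cleanup, NotFound means nothing is left to remove.
func removeContainer(id string, remove func(string) error) error {
	err := remove(id)
	if err != nil && !errors.Is(err, errNotFound) {
		return err // a real failure, worth surfacing
	}
	return nil // removed now, or already removed earlier
}

func main() {
	alreadyGone := func(id string) error {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	// Mirrors the log: the status lookup reports NotFound, the delete error
	// is recorded, and reconciliation proceeds as if the delete succeeded.
	fmt.Println(removeContainer("c2bd1913b649", alreadyGone)) // <nil>
}
```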
Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.189627 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-sb\") pod \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.189723 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv82b\" (UniqueName: \"kubernetes.io/projected/ce2ad46c-0d8c-4c7f-8025-89493e874a85-kube-api-access-xv82b\") pod \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.189805 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-svc\") pod \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\" (UID: \"ce2ad46c-0d8c-4c7f-8025-89493e874a85\") " Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.189842 4752 scope.go:117] "RemoveContainer" containerID="ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7" Jan 22 10:45:22 crc kubenswrapper[4752]: E0122 10:45:22.193041 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7\": container with ID starting with ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7 not found: ID does not exist" containerID="ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.193090 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7"} err="failed to get container status \"ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7\": rpc error: code = NotFound desc = could not find container \"ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7\": container with ID starting with ab7594ee27c101cbc148576b984b28d92e7021f4d1ccef5b829315b0db00d6b7 not found: ID does not exist" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.193116 4752 scope.go:117] "RemoveContainer" containerID="44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062" Jan 22 10:45:22 crc kubenswrapper[4752]: E0122 10:45:22.196011 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062\": container with ID starting with 44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062 not found: ID does not exist" containerID="44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.196060 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062"} err="failed to get container status \"44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062\": rpc error: code = NotFound desc = could not find container \"44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062\": container with ID starting with 44b6b877267f995bd3d6c2a94cc6de7dbb34e13e3602a48db0489dd36f2f9062 not found: ID does not exist" Jan 22 10:45:22 
crc kubenswrapper[4752]: I0122 10:45:22.207205 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2ad46c-0d8c-4c7f-8025-89493e874a85-kube-api-access-xv82b" (OuterVolumeSpecName: "kube-api-access-xv82b") pod "ce2ad46c-0d8c-4c7f-8025-89493e874a85" (UID: "ce2ad46c-0d8c-4c7f-8025-89493e874a85"). InnerVolumeSpecName "kube-api-access-xv82b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.259147 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce2ad46c-0d8c-4c7f-8025-89493e874a85" (UID: "ce2ad46c-0d8c-4c7f-8025-89493e874a85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.271432 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce2ad46c-0d8c-4c7f-8025-89493e874a85" (UID: "ce2ad46c-0d8c-4c7f-8025-89493e874a85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.271459 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce2ad46c-0d8c-4c7f-8025-89493e874a85" (UID: "ce2ad46c-0d8c-4c7f-8025-89493e874a85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.277309 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-config" (OuterVolumeSpecName: "config") pod "ce2ad46c-0d8c-4c7f-8025-89493e874a85" (UID: "ce2ad46c-0d8c-4c7f-8025-89493e874a85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.283359 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce2ad46c-0d8c-4c7f-8025-89493e874a85" (UID: "ce2ad46c-0d8c-4c7f-8025-89493e874a85"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.292566 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.292599 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.292609 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.292617 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.292626 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv82b\" (UniqueName: \"kubernetes.io/projected/ce2ad46c-0d8c-4c7f-8025-89493e874a85-kube-api-access-xv82b\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.292635 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce2ad46c-0d8c-4c7f-8025-89493e874a85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.426444 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f5db4cc5-c4kxs"] Jan 22 10:45:22 crc kubenswrapper[4752]: I0122 10:45:22.434056 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f5db4cc5-c4kxs"] Jan 22 10:45:23 crc kubenswrapper[4752]: I0122 10:45:23.112083 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" path="/var/lib/kubelet/pods/ce2ad46c-0d8c-4c7f-8025-89493e874a85/volumes" Jan 22 10:45:23 crc kubenswrapper[4752]: I0122 10:45:23.125441 4752 generic.go:334] "Generic (PLEG): container finished" podID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerID="560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d" exitCode=0 Jan 22 10:45:23 crc kubenswrapper[4752]: I0122 10:45:23.125493 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d930b78-d90f-4db7-b354-8fd6c13799cc","Type":"ContainerDied","Data":"560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d"} Jan 22 10:45:24 crc kubenswrapper[4752]: I0122 10:45:24.371702 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:45:24 crc kubenswrapper[4752]: I0122 10:45:24.710321 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5847fc978b-f7xwp" Jan 22 10:45:25 crc kubenswrapper[4752]: I0122 10:45:25.192536 4752 generic.go:334] "Generic (PLEG): container finished" podID="806e176d-686f-4523-822c-f519f6a6076d" containerID="a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757" exitCode=1 Jan 22 10:45:25 crc kubenswrapper[4752]: I0122 10:45:25.192585 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerDied","Data":"a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757"} Jan 22 10:45:25 crc kubenswrapper[4752]: I0122 10:45:25.192624 4752 scope.go:117] "RemoveContainer" containerID="850c9676da86d08770647f31e68bb12eb2beed371f768b333eb96fbd8db40f16" Jan 22 10:45:25 crc kubenswrapper[4752]: I0122 10:45:25.193144 4752 scope.go:117] "RemoveContainer" containerID="a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757" Jan 22 10:45:25 crc kubenswrapper[4752]: E0122 10:45:25.193460 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(806e176d-686f-4523-822c-f519f6a6076d)\"" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" Jan 22 10:45:26 crc kubenswrapper[4752]: I0122 10:45:26.570830 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6796b596df-mjdp9" Jan 22 10:45:26 crc kubenswrapper[4752]: I0122 10:45:26.663321 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59bf4d5494-h8d46"] Jan 22 10:45:26 crc kubenswrapper[4752]: I0122 10:45:26.663593 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59bf4d5494-h8d46" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-api" containerID="cri-o://d18f57d26752f629bc27360b054caa10c4aa7226f020ad68c748685e6f8cd063" gracePeriod=30 Jan 22 10:45:26 crc kubenswrapper[4752]: I0122 10:45:26.664336 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59bf4d5494-h8d46" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-httpd" containerID="cri-o://4fad7817e4352e2c73685b768f354419e94b86c1af96f13854a329ab0fb5f154" gracePeriod=30 Jan 22 10:45:27 crc kubenswrapper[4752]: I0122 10:45:27.724618 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:45:27 crc kubenswrapper[4752]: I0122 10:45:27.725070 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.013100 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 10:45:28 crc kubenswrapper[4752]: E0122 10:45:28.014788 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerName="init" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.014817 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerName="init" Jan 22 10:45:28 crc kubenswrapper[4752]: E0122 10:45:28.014843 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerName="dnsmasq-dns" Jan 22 10:45:28 crc kubenswrapper[4752]: 
I0122 10:45:28.014864 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerName="dnsmasq-dns" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.015052 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2ad46c-0d8c-4c7f-8025-89493e874a85" containerName="dnsmasq-dns" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.015715 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.021608 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.022027 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.022319 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-tv7jj" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.036884 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.149561 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b2ea0-58e6-4729-b41a-2816e37e0e2d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.149632 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/245b2ea0-58e6-4729-b41a-2816e37e0e2d-openstack-config-secret\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.149655 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/245b2ea0-58e6-4729-b41a-2816e37e0e2d-openstack-config\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.149718 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdgs\" (UniqueName: \"kubernetes.io/projected/245b2ea0-58e6-4729-b41a-2816e37e0e2d-kube-api-access-mfdgs\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.231735 4752 generic.go:334] "Generic (PLEG): container finished" podID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerID="4fad7817e4352e2c73685b768f354419e94b86c1af96f13854a329ab0fb5f154" exitCode=0 Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.231823 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59bf4d5494-h8d46" event={"ID":"4cecada1-c407-4aff-83a3-9af1f6a94efa","Type":"ContainerDied","Data":"4fad7817e4352e2c73685b768f354419e94b86c1af96f13854a329ab0fb5f154"} Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.251968 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/245b2ea0-58e6-4729-b41a-2816e37e0e2d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.252044 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/245b2ea0-58e6-4729-b41a-2816e37e0e2d-openstack-config-secret\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.252076 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/245b2ea0-58e6-4729-b41a-2816e37e0e2d-openstack-config\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.252107 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdgs\" (UniqueName: \"kubernetes.io/projected/245b2ea0-58e6-4729-b41a-2816e37e0e2d-kube-api-access-mfdgs\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.253535 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/245b2ea0-58e6-4729-b41a-2816e37e0e2d-openstack-config\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.262023 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/245b2ea0-58e6-4729-b41a-2816e37e0e2d-openstack-config-secret\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.269479 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b2ea0-58e6-4729-b41a-2816e37e0e2d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.271701 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdgs\" (UniqueName: \"kubernetes.io/projected/245b2ea0-58e6-4729-b41a-2816e37e0e2d-kube-api-access-mfdgs\") pod \"openstackclient\" (UID: \"245b2ea0-58e6-4729-b41a-2816e37e0e2d\") " pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.336953 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 10:45:28 crc kubenswrapper[4752]: I0122 10:45:28.908366 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 10:45:28 crc kubenswrapper[4752]: W0122 10:45:28.923823 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod245b2ea0_58e6_4729_b41a_2816e37e0e2d.slice/crio-9a1dfe9e027e69ffe0d1c81b507e16ed5d50493b39e6d29837a90bf5ce53971f WatchSource:0}: Error finding container 9a1dfe9e027e69ffe0d1c81b507e16ed5d50493b39e6d29837a90bf5ce53971f: Status 404 returned error can't find the container with id 9a1dfe9e027e69ffe0d1c81b507e16ed5d50493b39e6d29837a90bf5ce53971f Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.252191 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"245b2ea0-58e6-4729-b41a-2816e37e0e2d","Type":"ContainerStarted","Data":"9a1dfe9e027e69ffe0d1c81b507e16ed5d50493b39e6d29837a90bf5ce53971f"} Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.326412 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.345810 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-86f449cb4d-x9x9z" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.828440 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.832742 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.913218 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data\") pod \"6d930b78-d90f-4db7-b354-8fd6c13799cc\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.913300 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-scripts\") pod \"6d930b78-d90f-4db7-b354-8fd6c13799cc\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.913369 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-combined-ca-bundle\") pod \"6d930b78-d90f-4db7-b354-8fd6c13799cc\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.913540 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqm4j\" (UniqueName: \"kubernetes.io/projected/6d930b78-d90f-4db7-b354-8fd6c13799cc-kube-api-access-dqm4j\") pod \"6d930b78-d90f-4db7-b354-8fd6c13799cc\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.913565 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data-custom\") pod \"6d930b78-d90f-4db7-b354-8fd6c13799cc\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " Jan 22 10:45:29 crc 
kubenswrapper[4752]: I0122 10:45:29.913597 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d930b78-d90f-4db7-b354-8fd6c13799cc-etc-machine-id\") pod \"6d930b78-d90f-4db7-b354-8fd6c13799cc\" (UID: \"6d930b78-d90f-4db7-b354-8fd6c13799cc\") " Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.923928 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d930b78-d90f-4db7-b354-8fd6c13799cc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6d930b78-d90f-4db7-b354-8fd6c13799cc" (UID: "6d930b78-d90f-4db7-b354-8fd6c13799cc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.929245 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-scripts" (OuterVolumeSpecName: "scripts") pod "6d930b78-d90f-4db7-b354-8fd6c13799cc" (UID: "6d930b78-d90f-4db7-b354-8fd6c13799cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.933054 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d930b78-d90f-4db7-b354-8fd6c13799cc-kube-api-access-dqm4j" (OuterVolumeSpecName: "kube-api-access-dqm4j") pod "6d930b78-d90f-4db7-b354-8fd6c13799cc" (UID: "6d930b78-d90f-4db7-b354-8fd6c13799cc"). InnerVolumeSpecName "kube-api-access-dqm4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:29 crc kubenswrapper[4752]: I0122 10:45:29.945357 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6d930b78-d90f-4db7-b354-8fd6c13799cc" (UID: "6d930b78-d90f-4db7-b354-8fd6c13799cc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.009618 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d930b78-d90f-4db7-b354-8fd6c13799cc" (UID: "6d930b78-d90f-4db7-b354-8fd6c13799cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.017044 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.017089 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.017105 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqm4j\" (UniqueName: \"kubernetes.io/projected/6d930b78-d90f-4db7-b354-8fd6c13799cc-kube-api-access-dqm4j\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.017119 4752 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.017128 4752 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d930b78-d90f-4db7-b354-8fd6c13799cc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.077108 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data" (OuterVolumeSpecName: "config-data") pod "6d930b78-d90f-4db7-b354-8fd6c13799cc" (UID: "6d930b78-d90f-4db7-b354-8fd6c13799cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.119280 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d930b78-d90f-4db7-b354-8fd6c13799cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.162109 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c77556c9d-7cqmw" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.274351 4752 generic.go:334] "Generic (PLEG): container finished" podID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerID="ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462" exitCode=0 Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.274621 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.276282 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d930b78-d90f-4db7-b354-8fd6c13799cc","Type":"ContainerDied","Data":"ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462"} Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.276339 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d930b78-d90f-4db7-b354-8fd6c13799cc","Type":"ContainerDied","Data":"c3ca8d330cd17439ecad8b28f42724f20cba4aaf3c1eff2f037058e7396f0351"} Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.276358 4752 scope.go:117] "RemoveContainer" containerID="560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.280436 4752 generic.go:334] "Generic (PLEG): container finished" podID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerID="d18f57d26752f629bc27360b054caa10c4aa7226f020ad68c748685e6f8cd063" exitCode=0 Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.280965 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59bf4d5494-h8d46" event={"ID":"4cecada1-c407-4aff-83a3-9af1f6a94efa","Type":"ContainerDied","Data":"d18f57d26752f629bc27360b054caa10c4aa7226f020ad68c748685e6f8cd063"} Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.364843 4752 scope.go:117] "RemoveContainer" containerID="ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.380073 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.389999 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.400556 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:30 crc kubenswrapper[4752]: E0122 10:45:30.401814 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="probe" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.401908 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="probe" Jan 22 10:45:30 crc kubenswrapper[4752]: E0122 10:45:30.401993 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="cinder-scheduler" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.402316 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="cinder-scheduler" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.402587 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="probe" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.402799 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" containerName="cinder-scheduler" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.404719 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.431442 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.432677 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.450532 4752 scope.go:117] "RemoveContainer" containerID="560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d" Jan 22 10:45:30 crc kubenswrapper[4752]: E0122 10:45:30.450965 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d\": container with ID starting with 560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d not found: ID does not exist" containerID="560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.450992 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d"} err="failed to get container status \"560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d\": rpc error: code = NotFound desc = could not find container \"560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d\": container with ID starting with 560d059f913c7ff1354ef962ab43054198779c6e372e95129a6f846dc316710d not found: ID does not exist" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.451010 4752 scope.go:117] "RemoveContainer" containerID="ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462" Jan 22 10:45:30 crc kubenswrapper[4752]: E0122 10:45:30.452169 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462\": container with ID starting with ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462 not found: ID does not exist" containerID="ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.452192 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462"} err="failed to get container status \"ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462\": rpc error: code = NotFound desc = could not find container \"ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462\": container with ID starting with ed8dd68e69dd3c863cb64200db90146e0d36460146a2aacaccd419733aa8e462 not found: ID does not exist" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.538555 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.538624 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9r5\" (UniqueName: \"kubernetes.io/projected/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-kube-api-access-rs9r5\") pod 
\"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.538654 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.538670 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.538844 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.538971 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.641587 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.641683 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs9r5\" (UniqueName: \"kubernetes.io/projected/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-kube-api-access-rs9r5\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.641728 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.641753 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.641787 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 
10:45:30.641824 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.641747 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.647903 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.647935 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.654391 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.656418 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.657600 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs9r5\" (UniqueName: \"kubernetes.io/projected/1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d-kube-api-access-rs9r5\") pod \"cinder-scheduler-0\" (UID: \"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d\") " pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.747032 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.769399 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.844655 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-combined-ca-bundle\") pod \"4cecada1-c407-4aff-83a3-9af1f6a94efa\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.844716 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-config\") pod \"4cecada1-c407-4aff-83a3-9af1f6a94efa\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.844741 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-httpd-config\") pod \"4cecada1-c407-4aff-83a3-9af1f6a94efa\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.844760 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6pgz\" (UniqueName: \"kubernetes.io/projected/4cecada1-c407-4aff-83a3-9af1f6a94efa-kube-api-access-f6pgz\") pod \"4cecada1-c407-4aff-83a3-9af1f6a94efa\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.844907 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-ovndb-tls-certs\") pod \"4cecada1-c407-4aff-83a3-9af1f6a94efa\" (UID: \"4cecada1-c407-4aff-83a3-9af1f6a94efa\") " Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.851147 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cecada1-c407-4aff-83a3-9af1f6a94efa-kube-api-access-f6pgz" (OuterVolumeSpecName: "kube-api-access-f6pgz") pod "4cecada1-c407-4aff-83a3-9af1f6a94efa" (UID: "4cecada1-c407-4aff-83a3-9af1f6a94efa"). InnerVolumeSpecName "kube-api-access-f6pgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.851586 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4cecada1-c407-4aff-83a3-9af1f6a94efa" (UID: "4cecada1-c407-4aff-83a3-9af1f6a94efa"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.919223 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.919308 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.919970 4752 scope.go:117] "RemoveContainer" containerID="a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757" Jan 22 10:45:30 crc kubenswrapper[4752]: E0122 10:45:30.921297 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(806e176d-686f-4523-822c-f519f6a6076d)\"" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.947025 4752 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.947052 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6pgz\" (UniqueName: \"kubernetes.io/projected/4cecada1-c407-4aff-83a3-9af1f6a94efa-kube-api-access-f6pgz\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:30 crc kubenswrapper[4752]: I0122 10:45:30.958590 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cecada1-c407-4aff-83a3-9af1f6a94efa" (UID: "4cecada1-c407-4aff-83a3-9af1f6a94efa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.002326 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-config" (OuterVolumeSpecName: "config") pod "4cecada1-c407-4aff-83a3-9af1f6a94efa" (UID: "4cecada1-c407-4aff-83a3-9af1f6a94efa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.007248 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4cecada1-c407-4aff-83a3-9af1f6a94efa" (UID: "4cecada1-c407-4aff-83a3-9af1f6a94efa"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.049161 4752 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.049190 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.049200 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cecada1-c407-4aff-83a3-9af1f6a94efa-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.149775 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d930b78-d90f-4db7-b354-8fd6c13799cc" path="/var/lib/kubelet/pods/6d930b78-d90f-4db7-b354-8fd6c13799cc/volumes" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.305748 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59bf4d5494-h8d46" event={"ID":"4cecada1-c407-4aff-83a3-9af1f6a94efa","Type":"ContainerDied","Data":"0b5521fde68d2ae06d96c8198e16de38033ba27ecfb4f1ef9dc20f7ead048d4d"} Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.305875 4752 scope.go:117] "RemoveContainer" containerID="4fad7817e4352e2c73685b768f354419e94b86c1af96f13854a329ab0fb5f154" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.306203 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59bf4d5494-h8d46" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.308041 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.320941 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.467935 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59bf4d5494-h8d46"] Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.468768 4752 scope.go:117] "RemoveContainer" containerID="d18f57d26752f629bc27360b054caa10c4aa7226f020ad68c748685e6f8cd063" Jan 22 10:45:31 crc kubenswrapper[4752]: I0122 10:45:31.479559 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59bf4d5494-h8d46"] Jan 22 10:45:32 crc kubenswrapper[4752]: I0122 10:45:32.327095 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d","Type":"ContainerStarted","Data":"bae583000b07fc55e9cdeac43b0a0c66e51dadf7976ac009e6ba49ecc392cdd0"} Jan 22 10:45:32 crc kubenswrapper[4752]: I0122 10:45:32.327408 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d","Type":"ContainerStarted","Data":"1fcc528bb91d065c45ca8b0d8512c60f1c025c9f990f1f2d38efd1bcfc41c19d"} Jan 22 10:45:33 crc kubenswrapper[4752]: I0122 10:45:33.108892 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" 
path="/var/lib/kubelet/pods/4cecada1-c407-4aff-83a3-9af1f6a94efa/volumes" Jan 22 10:45:33 crc kubenswrapper[4752]: I0122 10:45:33.340915 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f8a72ec-ef74-4e8e-bc25-b5e488d0ad3d","Type":"ContainerStarted","Data":"6049f1d0901eac517c2566b3a16fc9cd545c81bbebead811fd1f0bb21b9d2dfd"} Jan 22 10:45:33 crc kubenswrapper[4752]: I0122 10:45:33.369763 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.369726494 podStartE2EDuration="3.369726494s" podCreationTimestamp="2026-01-22 10:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:33.358266892 +0000 UTC m=+1212.588209800" watchObservedRunningTime="2026-01-22 10:45:33.369726494 +0000 UTC m=+1212.599669402" Jan 22 10:45:35 crc kubenswrapper[4752]: I0122 10:45:35.768739 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.130595 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.131057 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-log" containerID="cri-o://b6a0ee8bc26c0c8f5f3d24e24a3ac8ab9adaaf385657ad0f4a59346291749cd3" gracePeriod=30 Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.131162 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-httpd" containerID="cri-o://06d020445b695ddfa8171298ddf6d2f583fc92ca26e0f0200d92bf69bad86886" gracePeriod=30 Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.265960 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b4f844d5-k72rp"] Jan 22 10:45:37 crc kubenswrapper[4752]: E0122 10:45:37.266546 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-httpd" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.266562 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-httpd" Jan 22 10:45:37 crc kubenswrapper[4752]: E0122 10:45:37.266579 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-api" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.266586 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-api" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.266753 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-api" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.266772 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cecada1-c407-4aff-83a3-9af1f6a94efa" containerName="neutron-httpd" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.267708 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.272674 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.274440 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.274892 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.288538 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b4f844d5-k72rp"] Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414290 4752 generic.go:334] "Generic (PLEG): container finished" podID="835169c7-e182-4674-adc7-18ef50e6a906" containerID="b6a0ee8bc26c0c8f5f3d24e24a3ac8ab9adaaf385657ad0f4a59346291749cd3" exitCode=143 Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414343 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"835169c7-e182-4674-adc7-18ef50e6a906","Type":"ContainerDied","Data":"b6a0ee8bc26c0c8f5f3d24e24a3ac8ab9adaaf385657ad0f4a59346291749cd3"} Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414622 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-config-data\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414689 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-public-tls-certs\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414789 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-combined-ca-bundle\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414897 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-run-httpd\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414919 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-log-httpd\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.414967 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-etc-swift\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.415019 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-internal-tls-certs\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.415048 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbxc\" (UniqueName: \"kubernetes.io/projected/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-kube-api-access-6zbxc\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517018 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-run-httpd\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517073 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-log-httpd\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517117 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-etc-swift\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517177 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-internal-tls-certs\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517202 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbxc\" (UniqueName: \"kubernetes.io/projected/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-kube-api-access-6zbxc\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517227 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-config-data\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517250 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-public-tls-certs\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.517672 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-combined-ca-bundle\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.526175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-log-httpd\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.527769 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-etc-swift\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.528832 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-run-httpd\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.545226 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-internal-tls-certs\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.545716 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-config-data\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.549878 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zbxc\" (UniqueName: \"kubernetes.io/projected/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-kube-api-access-6zbxc\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.550651 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-combined-ca-bundle\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: \"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.557142 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18ac410-73d6-4aa5-b7d5-1afb991bcbc1-public-tls-certs\") pod \"swift-proxy-6b4f844d5-k72rp\" (UID: 
\"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1\") " pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:37 crc kubenswrapper[4752]: I0122 10:45:37.587273 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:38 crc kubenswrapper[4752]: I0122 10:45:38.314486 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:45:38 crc kubenswrapper[4752]: I0122 10:45:38.315144 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-log" containerID="cri-o://df6b42e03466da56b943ece32e867a0f32b63e3bea2146c72b25a466fca70cd0" gracePeriod=30 Jan 22 10:45:38 crc kubenswrapper[4752]: I0122 10:45:38.315653 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-httpd" containerID="cri-o://9f5d0bd9c13f4f365c3572dede35d8b13ecb1864df5491c7af8d2cd90149180a" gracePeriod=30 Jan 22 10:45:38 crc kubenswrapper[4752]: I0122 10:45:38.446661 4752 generic.go:334] "Generic (PLEG): container finished" podID="835169c7-e182-4674-adc7-18ef50e6a906" containerID="06d020445b695ddfa8171298ddf6d2f583fc92ca26e0f0200d92bf69bad86886" exitCode=0 Jan 22 10:45:38 crc kubenswrapper[4752]: I0122 10:45:38.446754 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"835169c7-e182-4674-adc7-18ef50e6a906","Type":"ContainerDied","Data":"06d020445b695ddfa8171298ddf6d2f583fc92ca26e0f0200d92bf69bad86886"} Jan 22 10:45:39 crc kubenswrapper[4752]: I0122 10:45:39.469619 4752 generic.go:334] "Generic (PLEG): container finished" podID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerID="9f5d0bd9c13f4f365c3572dede35d8b13ecb1864df5491c7af8d2cd90149180a" exitCode=0 Jan 22 10:45:39 crc kubenswrapper[4752]: I0122 10:45:39.469648 4752 generic.go:334] "Generic (PLEG): container finished" podID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerID="df6b42e03466da56b943ece32e867a0f32b63e3bea2146c72b25a466fca70cd0" exitCode=143 Jan 22 10:45:39 crc kubenswrapper[4752]: I0122 10:45:39.469667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42ddad-62b5-482e-b7e4-015f4e138979","Type":"ContainerDied","Data":"9f5d0bd9c13f4f365c3572dede35d8b13ecb1864df5491c7af8d2cd90149180a"} Jan 22 10:45:39 crc kubenswrapper[4752]: I0122 10:45:39.469711 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42ddad-62b5-482e-b7e4-015f4e138979","Type":"ContainerDied","Data":"df6b42e03466da56b943ece32e867a0f32b63e3bea2146c72b25a466fca70cd0"} Jan 22 10:45:40 crc kubenswrapper[4752]: I0122 10:45:40.164138 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7c77556c9d-7cqmw" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Jan 22 10:45:40 crc kubenswrapper[4752]: I0122 10:45:40.164471 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:45:40 crc kubenswrapper[4752]: I0122 10:45:40.934737 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/cinder-scheduler-0" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.497893 4752 generic.go:334] "Generic (PLEG): container finished" podID="fd370da5-83df-42ba-a822-7cff763d174b" containerID="4d303f0ee7f28a29245b9c515d9656433fb564e5975e66e1df82bdddf2dad8f8" exitCode=137 Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.498348 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd370da5-83df-42ba-a822-7cff763d174b","Type":"ContainerDied","Data":"4d303f0ee7f28a29245b9c515d9656433fb564e5975e66e1df82bdddf2dad8f8"} Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.775959 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.843638 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tqn6\" (UniqueName: \"kubernetes.io/projected/fd370da5-83df-42ba-a822-7cff763d174b-kube-api-access-4tqn6\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.844334 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-sg-core-conf-yaml\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.844440 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-run-httpd\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.844536 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-log-httpd\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.844584 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-config-data\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.844608 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-scripts\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.844644 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-combined-ca-bundle\") pod \"fd370da5-83df-42ba-a822-7cff763d174b\" (UID: \"fd370da5-83df-42ba-a822-7cff763d174b\") " Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.845562 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: 
"fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.846408 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: "fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.851431 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd370da5-83df-42ba-a822-7cff763d174b-kube-api-access-4tqn6" (OuterVolumeSpecName: "kube-api-access-4tqn6") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: "fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "kube-api-access-4tqn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.851774 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-scripts" (OuterVolumeSpecName: "scripts") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: "fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.894984 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: "fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.949643 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.949716 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.949735 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tqn6\" (UniqueName: \"kubernetes.io/projected/fd370da5-83df-42ba-a822-7cff763d174b-kube-api-access-4tqn6\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.949747 4752 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.949763 4752 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd370da5-83df-42ba-a822-7cff763d174b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.950298 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: "fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:41 crc kubenswrapper[4752]: I0122 10:45:41.963846 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-config-data" (OuterVolumeSpecName: "config-data") pod "fd370da5-83df-42ba-a822-7cff763d174b" (UID: "fd370da5-83df-42ba-a822-7cff763d174b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.032558 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.053933 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.057012 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-logs\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.057124 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-config-data\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.057181 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48gmw\" (UniqueName: \"kubernetes.io/projected/3b42ddad-62b5-482e-b7e4-015f4e138979-kube-api-access-48gmw\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.057239 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-internal-tls-certs\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.058192 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-scripts\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.058279 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-httpd-run\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.058342 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-combined-ca-bundle\") pod \"3b42ddad-62b5-482e-b7e4-015f4e138979\" (UID: \"3b42ddad-62b5-482e-b7e4-015f4e138979\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.059820 4752 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.059846 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd370da5-83df-42ba-a822-7cff763d174b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.064338 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-logs" (OuterVolumeSpecName: "logs") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.081274 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.109401 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-scripts" (OuterVolumeSpecName: "scripts") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.109533 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.120122 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.120808 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b42ddad-62b5-482e-b7e4-015f4e138979-kube-api-access-48gmw" (OuterVolumeSpecName: "kube-api-access-48gmw") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "kube-api-access-48gmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.162408 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.163087 4752 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.163170 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.163251 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.163498 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42ddad-62b5-482e-b7e4-015f4e138979-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.163793 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48gmw\" (UniqueName: \"kubernetes.io/projected/3b42ddad-62b5-482e-b7e4-015f4e138979-kube-api-access-48gmw\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.168404 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.190542 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.195700 4752 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.261744 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-config-data" (OuterVolumeSpecName: "config-data") pod "3b42ddad-62b5-482e-b7e4-015f4e138979" (UID: "3b42ddad-62b5-482e-b7e4-015f4e138979"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.265842 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-httpd-run\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266064 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-config-data\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266203 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-combined-ca-bundle\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266484 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-scripts\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266616 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266719 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-public-tls-certs\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266844 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-logs\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.266976 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jglk\" (UniqueName: \"kubernetes.io/projected/835169c7-e182-4674-adc7-18ef50e6a906-kube-api-access-9jglk\") pod \"835169c7-e182-4674-adc7-18ef50e6a906\" (UID: \"835169c7-e182-4674-adc7-18ef50e6a906\") " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.268388 4752 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.268423 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.268435 4752 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42ddad-62b5-482e-b7e4-015f4e138979-internal-tls-certs\") 
on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.270320 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-logs" (OuterVolumeSpecName: "logs") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.270705 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.270735 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.272066 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835169c7-e182-4674-adc7-18ef50e6a906-kube-api-access-9jglk" (OuterVolumeSpecName: "kube-api-access-9jglk") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "kube-api-access-9jglk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.279278 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-scripts" (OuterVolumeSpecName: "scripts") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.298841 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.327142 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.353245 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b4f844d5-k72rp"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.362518 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-config-data" (OuterVolumeSpecName: "config-data") pod "835169c7-e182-4674-adc7-18ef50e6a906" (UID: "835169c7-e182-4674-adc7-18ef50e6a906"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369726 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369776 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369787 4752 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369798 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369808 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jglk\" (UniqueName: \"kubernetes.io/projected/835169c7-e182-4674-adc7-18ef50e6a906-kube-api-access-9jglk\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369816 4752 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/835169c7-e182-4674-adc7-18ef50e6a906-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369826 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.369835 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835169c7-e182-4674-adc7-18ef50e6a906-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.391352 4752 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.475728 4752 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.546441 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"835169c7-e182-4674-adc7-18ef50e6a906","Type":"ContainerDied","Data":"8bb66450a95cf59f9cdb359d23be5c1239b6ed0d61325f2a629a695c986baf1b"} Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.546550 4752 scope.go:117] "RemoveContainer" containerID="06d020445b695ddfa8171298ddf6d2f583fc92ca26e0f0200d92bf69bad86886" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.546822 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.595341 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42ddad-62b5-482e-b7e4-015f4e138979","Type":"ContainerDied","Data":"38c930f5f292cb81617e1cab3271f8756ead170e88ed87ee41257189a0fe6115"} Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.595441 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.632502 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b4f844d5-k72rp" event={"ID":"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1","Type":"ContainerStarted","Data":"253b549c802518d20f50cb7874992c6b73fc3fac6df5273d8313dbe79488573a"} Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.649101 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"245b2ea0-58e6-4729-b41a-2816e37e0e2d","Type":"ContainerStarted","Data":"d5bc2d3d243ccd902d01b0f9083d785f6eb96f5c2eb128c2350ee526394ef73d"} Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.652300 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd370da5-83df-42ba-a822-7cff763d174b","Type":"ContainerDied","Data":"89d8c316067b7a9527c1245cd1c3d770efa28ee4d1984508ebcd6ea8551f0f86"} Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.652991 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.678434 4752 scope.go:117] "RemoveContainer" containerID="b6a0ee8bc26c0c8f5f3d24e24a3ac8ab9adaaf385657ad0f4a59346291749cd3" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.695980 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.722357 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738386 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: E0122 10:45:42.738813 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738831 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: E0122 10:45:42.738849 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-log" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738871 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-log" Jan 22 10:45:42 crc kubenswrapper[4752]: E0122 10:45:42.738884 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738890 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: E0122 
10:45:42.738910 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="sg-core" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738915 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="sg-core" Jan 22 10:45:42 crc kubenswrapper[4752]: E0122 10:45:42.738930 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-log" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738936 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-log" Jan 22 10:45:42 crc kubenswrapper[4752]: E0122 10:45:42.738950 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="proxy-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.738956 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="proxy-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.739137 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="sg-core" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.739150 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-log" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.739158 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-log" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.739169 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" containerName="glance-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.739181 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd370da5-83df-42ba-a822-7cff763d174b" containerName="proxy-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.739189 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="835169c7-e182-4674-adc7-18ef50e6a906" containerName="glance-httpd" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.740185 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.749975 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.755544 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.755709 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7npjg" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.755939 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.777384 4752 scope.go:117] "RemoveContainer" containerID="9f5d0bd9c13f4f365c3572dede35d8b13ecb1864df5491c7af8d2cd90149180a" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.784650 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.788786 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.788835 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b7t\" (UniqueName: \"kubernetes.io/projected/4fb0ef95-0c63-4187-8c79-0c487fefa04e-kube-api-access-t4b7t\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.788888 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.788904 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.789084 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fb0ef95-0c63-4187-8c79-0c487fefa04e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.789202 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 
10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.789344 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.789469 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb0ef95-0c63-4187-8c79-0c487fefa04e-logs\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.805953 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.830377 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.846464 4752 scope.go:117] "RemoveContainer" containerID="df6b42e03466da56b943ece32e867a0f32b63e3bea2146c72b25a466fca70cd0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.882932 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.885173 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.890981 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.892319 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.893930 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb0ef95-0c63-4187-8c79-0c487fefa04e-logs\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.893959 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.893982 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b7t\" (UniqueName: \"kubernetes.io/projected/4fb0ef95-0c63-4187-8c79-0c487fefa04e-kube-api-access-t4b7t\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.894020 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" 
Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.894038 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.894096 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fb0ef95-0c63-4187-8c79-0c487fefa04e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.894131 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.894180 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.895214 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb0ef95-0c63-4187-8c79-0c487fefa04e-logs\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.900417 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fb0ef95-0c63-4187-8c79-0c487fefa04e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.900961 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.912465 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.913261 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.918107 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.943095 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.943303 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fb0ef95-0c63-4187-8c79-0c487fefa04e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.948053 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b7t\" (UniqueName: \"kubernetes.io/projected/4fb0ef95-0c63-4187-8c79-0c487fefa04e-kube-api-access-t4b7t\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.951717 4752 scope.go:117] "RemoveContainer" containerID="4d303f0ee7f28a29245b9c515d9656433fb564e5975e66e1df82bdddf2dad8f8" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.952398 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.252848483 podStartE2EDuration="15.952357588s" podCreationTimestamp="2026-01-22 10:45:27 +0000 UTC" firstStartedPulling="2026-01-22 10:45:28.926870602 +0000 UTC m=+1208.156813510" lastFinishedPulling="2026-01-22 10:45:41.626379707 +0000 UTC m=+1220.856322615" observedRunningTime="2026-01-22 10:45:42.835187764 +0000 UTC m=+1222.065130682" watchObservedRunningTime="2026-01-22 10:45:42.952357588 +0000 UTC m=+1222.182300496" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.962984 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"4fb0ef95-0c63-4187-8c79-0c487fefa04e\") " pod="openstack/glance-default-external-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.987186 4752 scope.go:117] "RemoveContainer" containerID="54be2e363e89413c2d58353d464b750d052099d2757fbd887fd4719dc710a8d2" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996123 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-config-data\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996189 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996217 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5vkt\" (UniqueName: 
\"kubernetes.io/projected/960fb14d-7176-4480-8d3d-8cb783c60370-kube-api-access-z5vkt\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996255 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960fb14d-7176-4480-8d3d-8cb783c60370-logs\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996301 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996355 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/960fb14d-7176-4480-8d3d-8cb783c60370-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996382 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-scripts\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:42 crc kubenswrapper[4752]: I0122 10:45:42.996498 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:42.999712 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.013992 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.030513 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.033678 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.037702 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.042334 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.064569 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.088440 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.098817 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.098904 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-config-data\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.098929 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.098950 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5vkt\" (UniqueName: \"kubernetes.io/projected/960fb14d-7176-4480-8d3d-8cb783c60370-kube-api-access-z5vkt\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.098987 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/960fb14d-7176-4480-8d3d-8cb783c60370-logs\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.099037 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.099135 4752 scope.go:117] "RemoveContainer" containerID="a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.101142 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/960fb14d-7176-4480-8d3d-8cb783c60370-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.101156 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.101433 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/960fb14d-7176-4480-8d3d-8cb783c60370-logs\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.099103 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/960fb14d-7176-4480-8d3d-8cb783c60370-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: E0122 10:45:43.104083 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(806e176d-686f-4523-822c-f519f6a6076d)\"" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.112014 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-scripts\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.115555 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.136805 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-scripts\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.146699 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5vkt\" (UniqueName: \"kubernetes.io/projected/960fb14d-7176-4480-8d3d-8cb783c60370-kube-api-access-z5vkt\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.162082 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-config-data\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.165949 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b42ddad-62b5-482e-b7e4-015f4e138979" path="/var/lib/kubelet/pods/3b42ddad-62b5-482e-b7e4-015f4e138979/volumes" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.168308 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="835169c7-e182-4674-adc7-18ef50e6a906" path="/var/lib/kubelet/pods/835169c7-e182-4674-adc7-18ef50e6a906/volumes" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.169645 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="fd370da5-83df-42ba-a822-7cff763d174b" path="/var/lib/kubelet/pods/fd370da5-83df-42ba-a822-7cff763d174b/volumes" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.170002 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/960fb14d-7176-4480-8d3d-8cb783c60370-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.180411 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"960fb14d-7176-4480-8d3d-8cb783c60370\") " pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.218416 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.218506 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvgn\" (UniqueName: \"kubernetes.io/projected/28e65413-9bab-4f8f-8cc4-15d80597fa3c-kube-api-access-vrvgn\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.218645 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-scripts\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.218701 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.218930 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-run-httpd\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.218996 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-log-httpd\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.219057 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-config-data\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.253660 4752 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322257 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-scripts\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322331 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322397 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-run-httpd\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322431 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-log-httpd\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322464 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-config-data\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322510 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.322534 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrvgn\" (UniqueName: \"kubernetes.io/projected/28e65413-9bab-4f8f-8cc4-15d80597fa3c-kube-api-access-vrvgn\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.323086 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-log-httpd\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.323086 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-run-httpd\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.328310 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-scripts\") pod \"ceilometer-0\" (UID: 
\"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.328679 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.331098 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-config-data\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.336697 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.346819 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrvgn\" (UniqueName: \"kubernetes.io/projected/28e65413-9bab-4f8f-8cc4-15d80597fa3c-kube-api-access-vrvgn\") pod \"ceilometer-0\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.350681 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:45:43 crc kubenswrapper[4752]: I0122 10:45:43.831719 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.692283 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerStarted","Data":"59f4abc7c47799056a58e9e7248cf318cb65c55960ada283aa95448f3961194e"} Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.692743 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerStarted","Data":"655acc7db549dbb47e1d299a66a340ac6746c35a78941750aca4ef959c939fdb"} Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.695209 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b4f844d5-k72rp" event={"ID":"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1","Type":"ContainerStarted","Data":"20cd2eac3136ec01c001532b3852840a5ff3b453f0130fc88b3463e5b10ddcce"} Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.695246 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b4f844d5-k72rp" event={"ID":"a18ac410-73d6-4aa5-b7d5-1afb991bcbc1","Type":"ContainerStarted","Data":"bfefc3f8ee6ba4cbd8098ab92b91eae48d867ce1f438d36d307d309cfdf9640f"} Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.696201 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.696227 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.704745 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 
Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.745755 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b4f844d5-k72rp" podStartSLOduration=7.745720142 podStartE2EDuration="7.745720142s" podCreationTimestamp="2026-01-22 10:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:44.720490149 +0000 UTC m=+1223.950433057" watchObservedRunningTime="2026-01-22 10:45:44.745720142 +0000 UTC m=+1223.975663050"
Jan 22 10:45:44 crc kubenswrapper[4752]: I0122 10:45:44.820151 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 22 10:45:44 crc kubenswrapper[4752]: W0122 10:45:44.822487 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fb0ef95_0c63_4187_8c79_0c487fefa04e.slice/crio-593e24a0d2b57a2525658fa8880a0aef515eeb285ae8c8fcaa1b04f7ee44d1e4 WatchSource:0}: Error finding container 593e24a0d2b57a2525658fa8880a0aef515eeb285ae8c8fcaa1b04f7ee44d1e4: Status 404 returned error can't find the container with id 593e24a0d2b57a2525658fa8880a0aef515eeb285ae8c8fcaa1b04f7ee44d1e4
Jan 22 10:45:45 crc kubenswrapper[4752]: I0122 10:45:45.489351 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:45:45 crc kubenswrapper[4752]: I0122 10:45:45.724269 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fb0ef95-0c63-4187-8c79-0c487fefa04e","Type":"ContainerStarted","Data":"6c1ebb55a28d5dc70b485605e63f4d63e40d3e9310edff8ac15c4dbdaf90be1a"}
Jan 22 10:45:45 crc kubenswrapper[4752]: I0122 10:45:45.724344 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fb0ef95-0c63-4187-8c79-0c487fefa04e","Type":"ContainerStarted","Data":"593e24a0d2b57a2525658fa8880a0aef515eeb285ae8c8fcaa1b04f7ee44d1e4"}
Jan 22 10:45:45 crc kubenswrapper[4752]: I0122 10:45:45.732034 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerStarted","Data":"17bceccaefb6658ffd9ef20d6a216e6c5384511375aa2689962331fd36a4e9a5"}
Jan 22 10:45:45 crc kubenswrapper[4752]: I0122 10:45:45.735720 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"960fb14d-7176-4480-8d3d-8cb783c60370","Type":"ContainerStarted","Data":"edb7a69e27f71ce19e6661d149f5d5f1b41599c2b4870bbb23618093fd5ea1bc"}
Jan 22 10:45:45 crc kubenswrapper[4752]: I0122 10:45:45.735806 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"960fb14d-7176-4480-8d3d-8cb783c60370","Type":"ContainerStarted","Data":"a0ee6aa8ad6b14651300b9921affb6d2789808022173fcf2604af0066d021883"}
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.223253 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pdjvk"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.226251 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.235042 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pdjvk"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.327739 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9rmfm"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.334089 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.342836 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9rmfm"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.371222 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-operator-scripts\") pod \"nova-api-db-create-pdjvk\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.371367 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrr8\" (UniqueName: \"kubernetes.io/projected/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-kube-api-access-pwrr8\") pod \"nova-api-db-create-pdjvk\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.427499 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nl2ct"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.429335 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.441947 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c5fa-account-create-update-w7sxp"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.443986 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.446706 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.459988 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c5fa-account-create-update-w7sxp"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.472618 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nl2ct"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.474953 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-operator-scripts\") pod \"nova-api-db-create-pdjvk\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.475065 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrr8\" (UniqueName: \"kubernetes.io/projected/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-kube-api-access-pwrr8\") pod \"nova-api-db-create-pdjvk\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.475106 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac49828-ebf2-410f-8bfd-37f8840d141d-operator-scripts\") pod \"nova-cell0-db-create-9rmfm\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.475140 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x45f\" (UniqueName: \"kubernetes.io/projected/dac49828-ebf2-410f-8bfd-37f8840d141d-kube-api-access-5x45f\") pod \"nova-cell0-db-create-9rmfm\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.476198 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-operator-scripts\") pod \"nova-api-db-create-pdjvk\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.510129 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrr8\" (UniqueName: \"kubernetes.io/projected/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-kube-api-access-pwrr8\") pod \"nova-api-db-create-pdjvk\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.545232 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pdjvk"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.577640 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13730739-d528-4830-8bad-72e01aa444fa-operator-scripts\") pod \"nova-api-c5fa-account-create-update-w7sxp\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.577797 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-operator-scripts\") pod \"nova-cell1-db-create-nl2ct\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.577837 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztqg\" (UniqueName: \"kubernetes.io/projected/13730739-d528-4830-8bad-72e01aa444fa-kube-api-access-cztqg\") pod \"nova-api-c5fa-account-create-update-w7sxp\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.577937 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5fbq\" (UniqueName: \"kubernetes.io/projected/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-kube-api-access-s5fbq\") pod \"nova-cell1-db-create-nl2ct\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.577995 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac49828-ebf2-410f-8bfd-37f8840d141d-operator-scripts\") pod \"nova-cell0-db-create-9rmfm\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.578031 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x45f\" (UniqueName: \"kubernetes.io/projected/dac49828-ebf2-410f-8bfd-37f8840d141d-kube-api-access-5x45f\") pod \"nova-cell0-db-create-9rmfm\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.579419 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac49828-ebf2-410f-8bfd-37f8840d141d-operator-scripts\") pod \"nova-cell0-db-create-9rmfm\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.604091 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x45f\" (UniqueName: \"kubernetes.io/projected/dac49828-ebf2-410f-8bfd-37f8840d141d-kube-api-access-5x45f\") pod \"nova-cell0-db-create-9rmfm\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.651261 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rmfm"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.663648 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fb42-account-create-update-gmwjz"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.665197 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.678570 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fb42-account-create-update-gmwjz"]
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.686392 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13730739-d528-4830-8bad-72e01aa444fa-operator-scripts\") pod \"nova-api-c5fa-account-create-update-w7sxp\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.686511 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-operator-scripts\") pod \"nova-cell1-db-create-nl2ct\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.686545 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cztqg\" (UniqueName: \"kubernetes.io/projected/13730739-d528-4830-8bad-72e01aa444fa-kube-api-access-cztqg\") pod \"nova-api-c5fa-account-create-update-w7sxp\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.686633 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5fbq\" (UniqueName: \"kubernetes.io/projected/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-kube-api-access-s5fbq\") pod \"nova-cell1-db-create-nl2ct\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.688484 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13730739-d528-4830-8bad-72e01aa444fa-operator-scripts\") pod \"nova-api-c5fa-account-create-update-w7sxp\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.689169 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-operator-scripts\") pod \"nova-cell1-db-create-nl2ct\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.708049 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.753745 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5fbq\" (UniqueName: \"kubernetes.io/projected/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-kube-api-access-s5fbq\") pod \"nova-cell1-db-create-nl2ct\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " pod="openstack/nova-cell1-db-create-nl2ct"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.776326 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cztqg\" (UniqueName: \"kubernetes.io/projected/13730739-d528-4830-8bad-72e01aa444fa-kube-api-access-cztqg\") pod \"nova-api-c5fa-account-create-update-w7sxp\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " pod="openstack/nova-api-c5fa-account-create-update-w7sxp"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.811061 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5zb\" (UniqueName: \"kubernetes.io/projected/96960c39-6790-47bb-9f2d-9bc3aec15e70-kube-api-access-rp5zb\") pod \"nova-cell0-fb42-account-create-update-gmwjz\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " pod="openstack/nova-cell0-fb42-account-create-update-gmwjz"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.811187 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96960c39-6790-47bb-9f2d-9bc3aec15e70-operator-scripts\") pod \"nova-cell0-fb42-account-create-update-gmwjz\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " pod="openstack/nova-cell0-fb42-account-create-update-gmwjz"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.836696 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"960fb14d-7176-4480-8d3d-8cb783c60370","Type":"ContainerStarted","Data":"62eec8bac20d3855772a872963d98322680dc968695f9da0babb4e6e92382f48"}
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.890668 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fb0ef95-0c63-4187-8c79-0c487fefa04e","Type":"ContainerStarted","Data":"1a38d16f5196f6901665976594dd40e2a8486128e626e71d9b1a7d01b4d571a9"}
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.914723 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp5zb\" (UniqueName: \"kubernetes.io/projected/96960c39-6790-47bb-9f2d-9bc3aec15e70-kube-api-access-rp5zb\") pod \"nova-cell0-fb42-account-create-update-gmwjz\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " pod="openstack/nova-cell0-fb42-account-create-update-gmwjz"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.914815 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96960c39-6790-47bb-9f2d-9bc3aec15e70-operator-scripts\") pod \"nova-cell0-fb42-account-create-update-gmwjz\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " pod="openstack/nova-cell0-fb42-account-create-update-gmwjz"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.915765 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96960c39-6790-47bb-9f2d-9bc3aec15e70-operator-scripts\") pod \"nova-cell0-fb42-account-create-update-gmwjz\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " pod="openstack/nova-cell0-fb42-account-create-update-gmwjz"
Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.929198 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerStarted","Data":"6ba9d12dd4e7fff64cc1425154dacc1044aa90a5323cf91059223a0050677f08"}
event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerStarted","Data":"6ba9d12dd4e7fff64cc1425154dacc1044aa90a5323cf91059223a0050677f08"} Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.957052 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp5zb\" (UniqueName: \"kubernetes.io/projected/96960c39-6790-47bb-9f2d-9bc3aec15e70-kube-api-access-rp5zb\") pod \"nova-cell0-fb42-account-create-update-gmwjz\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.978936 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-301c-account-create-update-w7p2q"] Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.981662 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.984383 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.990541 4752 generic.go:334] "Generic (PLEG): container finished" podID="5ba449ad-098c-4918-9403-750b0c29ee93" containerID="ee95aea4555325bf07e8ed3255ffe4d6d9d9f485d9c00e2707350ed5c6af4b29" exitCode=137 Jan 22 10:45:48 crc kubenswrapper[4752]: I0122 10:45:48.990637 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c77556c9d-7cqmw" event={"ID":"5ba449ad-098c-4918-9403-750b0c29ee93","Type":"ContainerDied","Data":"ee95aea4555325bf07e8ed3255ffe4d6d9d9f485d9c00e2707350ed5c6af4b29"} Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.050287 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nl2ct" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.052328 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-301c-account-create-update-w7p2q"] Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.076987 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.07695793 podStartE2EDuration="7.07695793s" podCreationTimestamp="2026-01-22 10:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:48.890204174 +0000 UTC m=+1228.120147082" watchObservedRunningTime="2026-01-22 10:45:49.07695793 +0000 UTC m=+1228.306900838" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.085938 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.113087 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.113061069 podStartE2EDuration="7.113061069s" podCreationTimestamp="2026-01-22 10:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:48.929503195 +0000 UTC m=+1228.159446103" watchObservedRunningTime="2026-01-22 10:45:49.113061069 +0000 UTC m=+1228.343003977" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.129323 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vpmv\" (UniqueName: \"kubernetes.io/projected/86f47086-110a-4d61-a140-ce98aeb0e321-kube-api-access-5vpmv\") pod \"nova-cell1-301c-account-create-update-w7p2q\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.129460 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f47086-110a-4d61-a140-ce98aeb0e321-operator-scripts\") pod \"nova-cell1-301c-account-create-update-w7p2q\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.182618 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.232116 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f47086-110a-4d61-a140-ce98aeb0e321-operator-scripts\") pod \"nova-cell1-301c-account-create-update-w7p2q\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.232365 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vpmv\" (UniqueName: \"kubernetes.io/projected/86f47086-110a-4d61-a140-ce98aeb0e321-kube-api-access-5vpmv\") pod \"nova-cell1-301c-account-create-update-w7p2q\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.234286 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f47086-110a-4d61-a140-ce98aeb0e321-operator-scripts\") pod \"nova-cell1-301c-account-create-update-w7p2q\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.269809 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vpmv\" (UniqueName: \"kubernetes.io/projected/86f47086-110a-4d61-a140-ce98aeb0e321-kube-api-access-5vpmv\") pod \"nova-cell1-301c-account-create-update-w7p2q\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.315210 4752 util.go:30] "No sandbox for pod can be 
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.344433 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c77556c9d-7cqmw"
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.439020 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-tls-certs\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.439601 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba449ad-098c-4918-9403-750b0c29ee93-logs\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.439741 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-scripts\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.439772 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcp6l\" (UniqueName: \"kubernetes.io/projected/5ba449ad-098c-4918-9403-750b0c29ee93-kube-api-access-wcp6l\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.439894 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-config-data\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.439968 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-combined-ca-bundle\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.440005 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-secret-key\") pod \"5ba449ad-098c-4918-9403-750b0c29ee93\" (UID: \"5ba449ad-098c-4918-9403-750b0c29ee93\") "
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.450092 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ba449ad-098c-4918-9403-750b0c29ee93-logs" (OuterVolumeSpecName: "logs") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.458655 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.461418 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba449ad-098c-4918-9403-750b0c29ee93-kube-api-access-wcp6l" (OuterVolumeSpecName: "kube-api-access-wcp6l") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "kube-api-access-wcp6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.506779 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-scripts" (OuterVolumeSpecName: "scripts") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.508632 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-config-data" (OuterVolumeSpecName: "config-data") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.542942 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.542979 4752 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.542991 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba449ad-098c-4918-9403-750b0c29ee93-logs\") on node \"crc\" DevicePath \"\""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.543001 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ba449ad-098c-4918-9403-750b0c29ee93-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.543012 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcp6l\" (UniqueName: \"kubernetes.io/projected/5ba449ad-098c-4918-9403-750b0c29ee93-kube-api-access-wcp6l\") on node \"crc\" DevicePath \"\""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.590480 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.596505 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "5ba449ad-098c-4918-9403-750b0c29ee93" (UID: "5ba449ad-098c-4918-9403-750b0c29ee93"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.600136 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9rmfm"] Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.609982 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pdjvk"] Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.652021 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.652070 4752 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba449ad-098c-4918-9403-750b0c29ee93-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.980000 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c5fa-account-create-update-w7sxp"] Jan 22 10:45:49 crc kubenswrapper[4752]: I0122 10:45:49.992424 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nl2ct"] Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.020372 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rmfm" event={"ID":"dac49828-ebf2-410f-8bfd-37f8840d141d","Type":"ContainerStarted","Data":"125186afa044097002ba8012582f08ba97ec4407200ad48e5a12f1d6742ee21a"} Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.021932 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" event={"ID":"13730739-d528-4830-8bad-72e01aa444fa","Type":"ContainerStarted","Data":"b26e34c4c72f6fec2d4315616e1277ae9cdc1117480b91e754c4162b63178980"} Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.023547 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdjvk" event={"ID":"4d9c0ce2-be7f-447d-b25f-2f4842f3e728","Type":"ContainerStarted","Data":"7d5b998d8bd4b42d403c15cd271cce1429419a8e379f4f3132f2d709c0c39eae"} Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.031123 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nl2ct" event={"ID":"3dc961e1-8eef-4fc2-a8da-fd17a08756f8","Type":"ContainerStarted","Data":"6b94366681fc582f333cbf015d0a11ad5108b195633649ef7b48b0fa4542f847"} Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.055796 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c77556c9d-7cqmw" event={"ID":"5ba449ad-098c-4918-9403-750b0c29ee93","Type":"ContainerDied","Data":"0023b916730ca1dad989f345a5d322c8c147416c8178351ba61b02aa9adbd265"} Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.056639 4752 scope.go:117] "RemoveContainer" containerID="9aadbd246544b4c33e87afdab4b26ff2bfd1c7d8c83899d5519375d8b2cfa6d6" Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.055930 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c77556c9d-7cqmw" Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.081256 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fb42-account-create-update-gmwjz"] Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.096157 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-301c-account-create-update-w7p2q"] Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.125193 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c77556c9d-7cqmw"] Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.144710 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c77556c9d-7cqmw"] Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.248601 4752 scope.go:117] "RemoveContainer" containerID="ee95aea4555325bf07e8ed3255ffe4d6d9d9f485d9c00e2707350ed5c6af4b29" Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.918980 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.919346 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:45:50 crc kubenswrapper[4752]: I0122 10:45:50.920368 4752 scope.go:117] "RemoveContainer" containerID="a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757" Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.062193 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerStarted","Data":"fd3e24a10a6bfb838078527b5b1dd225ae1a9a397f8ccef372850997ee864fa7"} Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.062480 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-central-agent" containerID="cri-o://59f4abc7c47799056a58e9e7248cf318cb65c55960ada283aa95448f3961194e" gracePeriod=30 Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.062892 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.063282 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="proxy-httpd" containerID="cri-o://fd3e24a10a6bfb838078527b5b1dd225ae1a9a397f8ccef372850997ee864fa7" gracePeriod=30 Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.063374 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="sg-core" containerID="cri-o://6ba9d12dd4e7fff64cc1425154dacc1044aa90a5323cf91059223a0050677f08" gracePeriod=30 Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.063442 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-notification-agent" containerID="cri-o://17bceccaefb6658ffd9ef20d6a216e6c5384511375aa2689962331fd36a4e9a5" gracePeriod=30 Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.071329 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rmfm" 
event={"ID":"dac49828-ebf2-410f-8bfd-37f8840d141d","Type":"ContainerStarted","Data":"52b0fd97c04749e302d3b84d9a789ebae6a804935036b4c1aa4b8b1bef7f6b72"} Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.079525 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdjvk" event={"ID":"4d9c0ce2-be7f-447d-b25f-2f4842f3e728","Type":"ContainerStarted","Data":"1b085abfd0322d431c88e079873ce548e193962ad20113ed80534157e59ad3a2"} Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.081396 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" event={"ID":"96960c39-6790-47bb-9f2d-9bc3aec15e70","Type":"ContainerStarted","Data":"185728c30fc4c050bf42edf8d3163838144b6aa6699c61b2b869c1242312bf28"} Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.084580 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-301c-account-create-update-w7p2q" event={"ID":"86f47086-110a-4d61-a140-ce98aeb0e321","Type":"ContainerStarted","Data":"186e10defe703ee53b4674c2cbab0a894b00b3c9e4470baeb77114b668d1ffb5"} Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.089598 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.536191244 podStartE2EDuration="9.089572338s" podCreationTimestamp="2026-01-22 10:45:42 +0000 UTC" firstStartedPulling="2026-01-22 10:45:43.826564293 +0000 UTC m=+1223.056507201" lastFinishedPulling="2026-01-22 10:45:49.379945387 +0000 UTC m=+1228.609888295" observedRunningTime="2026-01-22 10:45:51.084169271 +0000 UTC m=+1230.314112189" watchObservedRunningTime="2026-01-22 10:45:51.089572338 +0000 UTC m=+1230.319515246" Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.159874 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-9rmfm" podStartSLOduration=3.159805847 podStartE2EDuration="3.159805847s" podCreationTimestamp="2026-01-22 10:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:51.105868593 +0000 UTC m=+1230.335811501" watchObservedRunningTime="2026-01-22 10:45:51.159805847 +0000 UTC m=+1230.389748755" Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.166169 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-pdjvk" podStartSLOduration=3.166147429 podStartE2EDuration="3.166147429s" podCreationTimestamp="2026-01-22 10:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:45:51.131669251 +0000 UTC m=+1230.361612159" watchObservedRunningTime="2026-01-22 10:45:51.166147429 +0000 UTC m=+1230.396090337" Jan 22 10:45:51 crc kubenswrapper[4752]: I0122 10:45:51.169575 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" path="/var/lib/kubelet/pods/5ba449ad-098c-4918-9403-750b0c29ee93/volumes" Jan 22 10:45:52 crc kubenswrapper[4752]: I0122 10:45:52.097559 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" event={"ID":"13730739-d528-4830-8bad-72e01aa444fa","Type":"ContainerStarted","Data":"59361de9a7c4ec43013d1bf057312ca321e7fee12c68cedac44854ccff024721"} Jan 22 10:45:52 crc kubenswrapper[4752]: I0122 10:45:52.102606 4752 generic.go:334] "Generic (PLEG): 
container finished" podID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerID="fd3e24a10a6bfb838078527b5b1dd225ae1a9a397f8ccef372850997ee864fa7" exitCode=0 Jan 22 10:45:52 crc kubenswrapper[4752]: I0122 10:45:52.103164 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerDied","Data":"fd3e24a10a6bfb838078527b5b1dd225ae1a9a397f8ccef372850997ee864fa7"} Jan 22 10:45:52 crc kubenswrapper[4752]: I0122 10:45:52.592652 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:52 crc kubenswrapper[4752]: I0122 10:45:52.595010 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b4f844d5-k72rp" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.088813 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.089235 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.128887 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.143440 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" event={"ID":"96960c39-6790-47bb-9f2d-9bc3aec15e70","Type":"ContainerStarted","Data":"9029d94e65f557502056abf7f66d890f00d20a41d3b3338943fc50c2f25d60a5"} Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.145464 4752 generic.go:334] "Generic (PLEG): container finished" podID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerID="6ba9d12dd4e7fff64cc1425154dacc1044aa90a5323cf91059223a0050677f08" exitCode=2 Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.145591 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerDied","Data":"6ba9d12dd4e7fff64cc1425154dacc1044aa90a5323cf91059223a0050677f08"} Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.148672 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nl2ct" event={"ID":"3dc961e1-8eef-4fc2-a8da-fd17a08756f8","Type":"ContainerStarted","Data":"090092f5caac1739a8408990ae61fc2e9e03bce7c48a426f8d670e51d293979c"} Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.149162 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.153835 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.255041 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.255144 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.300250 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:53 crc kubenswrapper[4752]: I0122 10:45:53.307344 4752 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.161788 4752 generic.go:334] "Generic (PLEG): container finished" podID="13730739-d528-4830-8bad-72e01aa444fa" containerID="59361de9a7c4ec43013d1bf057312ca321e7fee12c68cedac44854ccff024721" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.162163 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" event={"ID":"13730739-d528-4830-8bad-72e01aa444fa","Type":"ContainerDied","Data":"59361de9a7c4ec43013d1bf057312ca321e7fee12c68cedac44854ccff024721"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.164685 4752 generic.go:334] "Generic (PLEG): container finished" podID="4d9c0ce2-be7f-447d-b25f-2f4842f3e728" containerID="1b085abfd0322d431c88e079873ce548e193962ad20113ed80534157e59ad3a2" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.164724 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdjvk" event={"ID":"4d9c0ce2-be7f-447d-b25f-2f4842f3e728","Type":"ContainerDied","Data":"1b085abfd0322d431c88e079873ce548e193962ad20113ed80534157e59ad3a2"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.166076 4752 generic.go:334] "Generic (PLEG): container finished" podID="96960c39-6790-47bb-9f2d-9bc3aec15e70" containerID="9029d94e65f557502056abf7f66d890f00d20a41d3b3338943fc50c2f25d60a5" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.166115 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" event={"ID":"96960c39-6790-47bb-9f2d-9bc3aec15e70","Type":"ContainerDied","Data":"9029d94e65f557502056abf7f66d890f00d20a41d3b3338943fc50c2f25d60a5"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.167556 4752 generic.go:334] "Generic (PLEG): container finished" podID="86f47086-110a-4d61-a140-ce98aeb0e321" containerID="f760cd267e82d1783e650ca32e8191016404196906843b1f89b1fc4bf2d6f723" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.167595 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-301c-account-create-update-w7p2q" event={"ID":"86f47086-110a-4d61-a140-ce98aeb0e321","Type":"ContainerDied","Data":"f760cd267e82d1783e650ca32e8191016404196906843b1f89b1fc4bf2d6f723"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.169796 4752 generic.go:334] "Generic (PLEG): container finished" podID="3dc961e1-8eef-4fc2-a8da-fd17a08756f8" containerID="090092f5caac1739a8408990ae61fc2e9e03bce7c48a426f8d670e51d293979c" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.169878 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nl2ct" event={"ID":"3dc961e1-8eef-4fc2-a8da-fd17a08756f8","Type":"ContainerDied","Data":"090092f5caac1739a8408990ae61fc2e9e03bce7c48a426f8d670e51d293979c"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.175009 4752 generic.go:334] "Generic (PLEG): container finished" podID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerID="17bceccaefb6658ffd9ef20d6a216e6c5384511375aa2689962331fd36a4e9a5" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.175068 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerDied","Data":"17bceccaefb6658ffd9ef20d6a216e6c5384511375aa2689962331fd36a4e9a5"} Jan 22 
10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.177781 4752 generic.go:334] "Generic (PLEG): container finished" podID="dac49828-ebf2-410f-8bfd-37f8840d141d" containerID="52b0fd97c04749e302d3b84d9a789ebae6a804935036b4c1aa4b8b1bef7f6b72" exitCode=0 Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.177877 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rmfm" event={"ID":"dac49828-ebf2-410f-8bfd-37f8840d141d","Type":"ContainerDied","Data":"52b0fd97c04749e302d3b84d9a789ebae6a804935036b4c1aa4b8b1bef7f6b72"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.180144 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerStarted","Data":"81d429a6da09dc2d168c69b6baf52b7cb15c263cdc9d8e028ff413c25c3fa9d8"} Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.180778 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.180942 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:54 crc kubenswrapper[4752]: I0122 10:45:54.180976 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.015746 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.028168 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.693195 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nl2ct" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.760543 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-operator-scripts\") pod \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.760659 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5fbq\" (UniqueName: \"kubernetes.io/projected/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-kube-api-access-s5fbq\") pod \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\" (UID: \"3dc961e1-8eef-4fc2-a8da-fd17a08756f8\") " Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.761199 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dc961e1-8eef-4fc2-a8da-fd17a08756f8" (UID: "3dc961e1-8eef-4fc2-a8da-fd17a08756f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.776626 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-kube-api-access-s5fbq" (OuterVolumeSpecName: "kube-api-access-s5fbq") pod "3dc961e1-8eef-4fc2-a8da-fd17a08756f8" (UID: "3dc961e1-8eef-4fc2-a8da-fd17a08756f8"). InnerVolumeSpecName "kube-api-access-s5fbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.863486 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:55 crc kubenswrapper[4752]: I0122 10:45:55.863537 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5fbq\" (UniqueName: \"kubernetes.io/projected/3dc961e1-8eef-4fc2-a8da-fd17a08756f8-kube-api-access-s5fbq\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.157533 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.175156 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13730739-d528-4830-8bad-72e01aa444fa-operator-scripts\") pod \"13730739-d528-4830-8bad-72e01aa444fa\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.175899 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cztqg\" (UniqueName: \"kubernetes.io/projected/13730739-d528-4830-8bad-72e01aa444fa-kube-api-access-cztqg\") pod \"13730739-d528-4830-8bad-72e01aa444fa\" (UID: \"13730739-d528-4830-8bad-72e01aa444fa\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.178973 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13730739-d528-4830-8bad-72e01aa444fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13730739-d528-4830-8bad-72e01aa444fa" (UID: "13730739-d528-4830-8bad-72e01aa444fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.183565 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rmfm" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.189596 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13730739-d528-4830-8bad-72e01aa444fa-kube-api-access-cztqg" (OuterVolumeSpecName: "kube-api-access-cztqg") pod "13730739-d528-4830-8bad-72e01aa444fa" (UID: "13730739-d528-4830-8bad-72e01aa444fa"). InnerVolumeSpecName "kube-api-access-cztqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.191178 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pdjvk" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.247427 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.248849 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" event={"ID":"96960c39-6790-47bb-9f2d-9bc3aec15e70","Type":"ContainerDied","Data":"185728c30fc4c050bf42edf8d3163838144b6aa6699c61b2b869c1242312bf28"} Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.248932 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="185728c30fc4c050bf42edf8d3163838144b6aa6699c61b2b869c1242312bf28" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.251210 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.261226 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-301c-account-create-update-w7p2q" event={"ID":"86f47086-110a-4d61-a140-ce98aeb0e321","Type":"ContainerDied","Data":"186e10defe703ee53b4674c2cbab0a894b00b3c9e4470baeb77114b668d1ffb5"} Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.261283 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186e10defe703ee53b4674c2cbab0a894b00b3c9e4470baeb77114b668d1ffb5" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.275513 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nl2ct" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.275526 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nl2ct" event={"ID":"3dc961e1-8eef-4fc2-a8da-fd17a08756f8","Type":"ContainerDied","Data":"6b94366681fc582f333cbf015d0a11ad5108b195633649ef7b48b0fa4542f847"} Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.275591 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b94366681fc582f333cbf015d0a11ad5108b195633649ef7b48b0fa4542f847" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.277747 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f47086-110a-4d61-a140-ce98aeb0e321-operator-scripts\") pod \"86f47086-110a-4d61-a140-ce98aeb0e321\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.277845 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac49828-ebf2-410f-8bfd-37f8840d141d-operator-scripts\") pod \"dac49828-ebf2-410f-8bfd-37f8840d141d\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.277913 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrr8\" (UniqueName: \"kubernetes.io/projected/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-kube-api-access-pwrr8\") pod \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.277936 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96960c39-6790-47bb-9f2d-9bc3aec15e70-operator-scripts\") pod \"96960c39-6790-47bb-9f2d-9bc3aec15e70\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " Jan 22 10:45:56 
crc kubenswrapper[4752]: I0122 10:45:56.277962 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-operator-scripts\") pod \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\" (UID: \"4d9c0ce2-be7f-447d-b25f-2f4842f3e728\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.278030 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vpmv\" (UniqueName: \"kubernetes.io/projected/86f47086-110a-4d61-a140-ce98aeb0e321-kube-api-access-5vpmv\") pod \"86f47086-110a-4d61-a140-ce98aeb0e321\" (UID: \"86f47086-110a-4d61-a140-ce98aeb0e321\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.278073 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp5zb\" (UniqueName: \"kubernetes.io/projected/96960c39-6790-47bb-9f2d-9bc3aec15e70-kube-api-access-rp5zb\") pod \"96960c39-6790-47bb-9f2d-9bc3aec15e70\" (UID: \"96960c39-6790-47bb-9f2d-9bc3aec15e70\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.278122 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x45f\" (UniqueName: \"kubernetes.io/projected/dac49828-ebf2-410f-8bfd-37f8840d141d-kube-api-access-5x45f\") pod \"dac49828-ebf2-410f-8bfd-37f8840d141d\" (UID: \"dac49828-ebf2-410f-8bfd-37f8840d141d\") " Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.278508 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cztqg\" (UniqueName: \"kubernetes.io/projected/13730739-d528-4830-8bad-72e01aa444fa-kube-api-access-cztqg\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.278535 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13730739-d528-4830-8bad-72e01aa444fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.279097 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96960c39-6790-47bb-9f2d-9bc3aec15e70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96960c39-6790-47bb-9f2d-9bc3aec15e70" (UID: "96960c39-6790-47bb-9f2d-9bc3aec15e70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.279731 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f47086-110a-4d61-a140-ce98aeb0e321-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86f47086-110a-4d61-a140-ce98aeb0e321" (UID: "86f47086-110a-4d61-a140-ce98aeb0e321"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.280130 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d9c0ce2-be7f-447d-b25f-2f4842f3e728" (UID: "4d9c0ce2-be7f-447d-b25f-2f4842f3e728"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.281998 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac49828-ebf2-410f-8bfd-37f8840d141d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac49828-ebf2-410f-8bfd-37f8840d141d" (UID: "dac49828-ebf2-410f-8bfd-37f8840d141d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.287050 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96960c39-6790-47bb-9f2d-9bc3aec15e70-kube-api-access-rp5zb" (OuterVolumeSpecName: "kube-api-access-rp5zb") pod "96960c39-6790-47bb-9f2d-9bc3aec15e70" (UID: "96960c39-6790-47bb-9f2d-9bc3aec15e70"). InnerVolumeSpecName "kube-api-access-rp5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.288766 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-kube-api-access-pwrr8" (OuterVolumeSpecName: "kube-api-access-pwrr8") pod "4d9c0ce2-be7f-447d-b25f-2f4842f3e728" (UID: "4d9c0ce2-be7f-447d-b25f-2f4842f3e728"). InnerVolumeSpecName "kube-api-access-pwrr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.289514 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rmfm" event={"ID":"dac49828-ebf2-410f-8bfd-37f8840d141d","Type":"ContainerDied","Data":"125186afa044097002ba8012582f08ba97ec4407200ad48e5a12f1d6742ee21a"} Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.289565 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="125186afa044097002ba8012582f08ba97ec4407200ad48e5a12f1d6742ee21a" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.289634 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rmfm" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.292219 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f47086-110a-4d61-a140-ce98aeb0e321-kube-api-access-5vpmv" (OuterVolumeSpecName: "kube-api-access-5vpmv") pod "86f47086-110a-4d61-a140-ce98aeb0e321" (UID: "86f47086-110a-4d61-a140-ce98aeb0e321"). InnerVolumeSpecName "kube-api-access-5vpmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.293969 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac49828-ebf2-410f-8bfd-37f8840d141d-kube-api-access-5x45f" (OuterVolumeSpecName: "kube-api-access-5x45f") pod "dac49828-ebf2-410f-8bfd-37f8840d141d" (UID: "dac49828-ebf2-410f-8bfd-37f8840d141d"). InnerVolumeSpecName "kube-api-access-5x45f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.295670 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.295667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c5fa-account-create-update-w7sxp" event={"ID":"13730739-d528-4830-8bad-72e01aa444fa","Type":"ContainerDied","Data":"b26e34c4c72f6fec2d4315616e1277ae9cdc1117480b91e754c4162b63178980"} Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.296278 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26e34c4c72f6fec2d4315616e1277ae9cdc1117480b91e754c4162b63178980" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.297034 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.297045 4752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.297530 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pdjvk" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.298027 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdjvk" event={"ID":"4d9c0ce2-be7f-447d-b25f-2f4842f3e728","Type":"ContainerDied","Data":"7d5b998d8bd4b42d403c15cd271cce1429419a8e379f4f3132f2d709c0c39eae"} Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.298046 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5b998d8bd4b42d403c15cd271cce1429419a8e379f4f3132f2d709c0c39eae" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.367597 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.367691 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382701 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac49828-ebf2-410f-8bfd-37f8840d141d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382739 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwrr8\" (UniqueName: \"kubernetes.io/projected/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-kube-api-access-pwrr8\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382752 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96960c39-6790-47bb-9f2d-9bc3aec15e70-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382762 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9c0ce2-be7f-447d-b25f-2f4842f3e728-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382772 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vpmv\" (UniqueName: \"kubernetes.io/projected/86f47086-110a-4d61-a140-ce98aeb0e321-kube-api-access-5vpmv\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382781 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp5zb\" (UniqueName: 
\"kubernetes.io/projected/96960c39-6790-47bb-9f2d-9bc3aec15e70-kube-api-access-rp5zb\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382790 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x45f\" (UniqueName: \"kubernetes.io/projected/dac49828-ebf2-410f-8bfd-37f8840d141d-kube-api-access-5x45f\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:56 crc kubenswrapper[4752]: I0122 10:45:56.382802 4752 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86f47086-110a-4d61-a140-ce98aeb0e321-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:45:57 crc kubenswrapper[4752]: I0122 10:45:57.307874 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-301c-account-create-update-w7p2q" Jan 22 10:45:57 crc kubenswrapper[4752]: I0122 10:45:57.307882 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fb42-account-create-update-gmwjz" Jan 22 10:45:57 crc kubenswrapper[4752]: I0122 10:45:57.723275 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:45:57 crc kubenswrapper[4752]: I0122 10:45:57.723321 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.903750 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjn85"] Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904466 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13730739-d528-4830-8bad-72e01aa444fa" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904482 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="13730739-d528-4830-8bad-72e01aa444fa" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904491 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac49828-ebf2-410f-8bfd-37f8840d141d" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904499 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac49828-ebf2-410f-8bfd-37f8840d141d" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904518 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96960c39-6790-47bb-9f2d-9bc3aec15e70" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904526 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="96960c39-6790-47bb-9f2d-9bc3aec15e70" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904538 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9c0ce2-be7f-447d-b25f-2f4842f3e728" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904547 4752 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9c0ce2-be7f-447d-b25f-2f4842f3e728" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904561 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc961e1-8eef-4fc2-a8da-fd17a08756f8" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904568 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc961e1-8eef-4fc2-a8da-fd17a08756f8" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904587 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon-log" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904594 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon-log" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904607 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f47086-110a-4d61-a140-ce98aeb0e321" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904614 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f47086-110a-4d61-a140-ce98aeb0e321" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: E0122 10:45:58.904625 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904632 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904829 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9c0ce2-be7f-447d-b25f-2f4842f3e728" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904868 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="13730739-d528-4830-8bad-72e01aa444fa" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904880 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc961e1-8eef-4fc2-a8da-fd17a08756f8" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904889 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f47086-110a-4d61-a140-ce98aeb0e321" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904903 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon-log" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904913 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba449ad-098c-4918-9403-750b0c29ee93" containerName="horizon" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904929 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac49828-ebf2-410f-8bfd-37f8840d141d" containerName="mariadb-database-create" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.904935 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="96960c39-6790-47bb-9f2d-9bc3aec15e70" containerName="mariadb-account-create-update" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.905614 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.908506 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.908756 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.909372 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m69ls" Jan 22 10:45:58 crc kubenswrapper[4752]: I0122 10:45:58.938617 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjn85"] Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.032913 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.033094 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-scripts\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.033175 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f24d\" (UniqueName: \"kubernetes.io/projected/e16246f9-0755-4513-bd29-c487e9491528-kube-api-access-2f24d\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.033222 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-config-data\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.135498 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.135715 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-scripts\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.135784 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f24d\" (UniqueName: \"kubernetes.io/projected/e16246f9-0755-4513-bd29-c487e9491528-kube-api-access-2f24d\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: 
\"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.135843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-config-data\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.144025 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.149300 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-scripts\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.169581 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-config-data\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.171338 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f24d\" (UniqueName: \"kubernetes.io/projected/e16246f9-0755-4513-bd29-c487e9491528-kube-api-access-2f24d\") pod \"nova-cell0-conductor-db-sync-jjn85\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.227266 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:45:59 crc kubenswrapper[4752]: I0122 10:45:59.735495 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjn85"] Jan 22 10:46:00 crc kubenswrapper[4752]: I0122 10:46:00.349730 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjn85" event={"ID":"e16246f9-0755-4513-bd29-c487e9491528","Type":"ContainerStarted","Data":"6ddfb2a635ede3ecb91ac3ab0c330ae88e1b440efc3864ee985bb73dcf2656ac"} Jan 22 10:46:00 crc kubenswrapper[4752]: I0122 10:46:00.919011 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 22 10:46:00 crc kubenswrapper[4752]: I0122 10:46:00.950241 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 22 10:46:01 crc kubenswrapper[4752]: I0122 10:46:01.361011 4752 generic.go:334] "Generic (PLEG): container finished" podID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerID="59f4abc7c47799056a58e9e7248cf318cb65c55960ada283aa95448f3961194e" exitCode=0 Jan 22 10:46:01 crc kubenswrapper[4752]: I0122 10:46:01.362483 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerDied","Data":"59f4abc7c47799056a58e9e7248cf318cb65c55960ada283aa95448f3961194e"} Jan 22 10:46:01 crc kubenswrapper[4752]: I0122 10:46:01.363022 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 22 10:46:01 crc kubenswrapper[4752]: I0122 10:46:01.409972 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 22 10:46:01 crc kubenswrapper[4752]: I0122 10:46:01.464282 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.375308 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28e65413-9bab-4f8f-8cc4-15d80597fa3c","Type":"ContainerDied","Data":"655acc7db549dbb47e1d299a66a340ac6746c35a78941750aca4ef959c939fdb"} Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.375609 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="655acc7db549dbb47e1d299a66a340ac6746c35a78941750aca4ef959c939fdb" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.399376 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512013 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-combined-ca-bundle\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512058 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrvgn\" (UniqueName: \"kubernetes.io/projected/28e65413-9bab-4f8f-8cc4-15d80597fa3c-kube-api-access-vrvgn\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512077 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-scripts\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512113 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-sg-core-conf-yaml\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512196 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-config-data\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512216 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-log-httpd\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.512274 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-run-httpd\") pod \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\" (UID: \"28e65413-9bab-4f8f-8cc4-15d80597fa3c\") " Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.513994 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.514105 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.519680 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-scripts" (OuterVolumeSpecName: "scripts") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.535978 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e65413-9bab-4f8f-8cc4-15d80597fa3c-kube-api-access-vrvgn" (OuterVolumeSpecName: "kube-api-access-vrvgn") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "kube-api-access-vrvgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.579330 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.609050 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.614450 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.614475 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrvgn\" (UniqueName: \"kubernetes.io/projected/28e65413-9bab-4f8f-8cc4-15d80597fa3c-kube-api-access-vrvgn\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.614487 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.614495 4752 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.614503 4752 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.614511 4752 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28e65413-9bab-4f8f-8cc4-15d80597fa3c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.626812 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-config-data" (OuterVolumeSpecName: "config-data") pod "28e65413-9bab-4f8f-8cc4-15d80597fa3c" (UID: "28e65413-9bab-4f8f-8cc4-15d80597fa3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:02 crc kubenswrapper[4752]: I0122 10:46:02.716706 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e65413-9bab-4f8f-8cc4-15d80597fa3c-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.382602 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.382648 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine" containerID="cri-o://81d429a6da09dc2d168c69b6baf52b7cb15c263cdc9d8e028ff413c25c3fa9d8" gracePeriod=30
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.419365 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.437333 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.448980 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:03 crc kubenswrapper[4752]: E0122 10:46:03.449485 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="proxy-httpd"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449501 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="proxy-httpd"
Jan 22 10:46:03 crc kubenswrapper[4752]: E0122 10:46:03.449546 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="sg-core"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449554 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="sg-core"
Jan 22 10:46:03 crc kubenswrapper[4752]: E0122 10:46:03.449570 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-central-agent"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449581 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-central-agent"
Jan 22 10:46:03 crc kubenswrapper[4752]: E0122 10:46:03.449596 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-notification-agent"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449605 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-notification-agent"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449898 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-central-agent"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449921 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="proxy-httpd"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449933 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="sg-core"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.449945 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" containerName="ceilometer-notification-agent"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.456978 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.459559 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.459821 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.476467 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530124 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530183 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-scripts\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530230 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-log-httpd\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530288 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530313 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-config-data\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530353 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-run-httpd\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.530377 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkbxb\" (UniqueName: \"kubernetes.io/projected/42c60f18-700d-4ad0-a572-d3ffd1d4efca-kube-api-access-mkbxb\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634530 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634600 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-scripts\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634661 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-log-httpd\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634716 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634753 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-config-data\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634819 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-run-httpd\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.634909 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkbxb\" (UniqueName: \"kubernetes.io/projected/42c60f18-700d-4ad0-a572-d3ffd1d4efca-kube-api-access-mkbxb\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.635293 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-log-httpd\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.635552 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-run-httpd\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.639107 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-config-data\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.640100 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.642000 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-scripts\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.642343 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.653039 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkbxb\" (UniqueName: \"kubernetes.io/projected/42c60f18-700d-4ad0-a572-d3ffd1d4efca-kube-api-access-mkbxb\") pod \"ceilometer-0\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") " pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.780761 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:46:03 crc kubenswrapper[4752]: I0122 10:46:03.966170 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:05 crc kubenswrapper[4752]: I0122 10:46:05.112561 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e65413-9bab-4f8f-8cc4-15d80597fa3c" path="/var/lib/kubelet/pods/28e65413-9bab-4f8f-8cc4-15d80597fa3c/volumes"
Jan 22 10:46:07 crc kubenswrapper[4752]: I0122 10:46:07.433301 4752 generic.go:334] "Generic (PLEG): container finished" podID="806e176d-686f-4523-822c-f519f6a6076d" containerID="81d429a6da09dc2d168c69b6baf52b7cb15c263cdc9d8e028ff413c25c3fa9d8" exitCode=0
Jan 22 10:46:07 crc kubenswrapper[4752]: I0122 10:46:07.433394 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerDied","Data":"81d429a6da09dc2d168c69b6baf52b7cb15c263cdc9d8e028ff413c25c3fa9d8"}
Jan 22 10:46:07 crc kubenswrapper[4752]: I0122 10:46:07.433606 4752 scope.go:117] "RemoveContainer" containerID="a716c6a764ce27358ef039d89ad40c367c143fe72cf879f9e4ce33e32e847757"
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.747639 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.846496 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.870700 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-config-data\") pod \"806e176d-686f-4523-822c-f519f6a6076d\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") "
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.870789 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g8ng\" (UniqueName: \"kubernetes.io/projected/806e176d-686f-4523-822c-f519f6a6076d-kube-api-access-8g8ng\") pod \"806e176d-686f-4523-822c-f519f6a6076d\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") "
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.870880 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/806e176d-686f-4523-822c-f519f6a6076d-logs\") pod \"806e176d-686f-4523-822c-f519f6a6076d\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") "
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.871033 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-custom-prometheus-ca\") pod \"806e176d-686f-4523-822c-f519f6a6076d\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") "
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.871117 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-combined-ca-bundle\") pod \"806e176d-686f-4523-822c-f519f6a6076d\" (UID: \"806e176d-686f-4523-822c-f519f6a6076d\") "
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.871479 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806e176d-686f-4523-822c-f519f6a6076d-logs" (OuterVolumeSpecName: "logs") pod "806e176d-686f-4523-822c-f519f6a6076d" (UID: "806e176d-686f-4523-822c-f519f6a6076d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.871934 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/806e176d-686f-4523-822c-f519f6a6076d-logs\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.882108 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806e176d-686f-4523-822c-f519f6a6076d-kube-api-access-8g8ng" (OuterVolumeSpecName: "kube-api-access-8g8ng") pod "806e176d-686f-4523-822c-f519f6a6076d" (UID: "806e176d-686f-4523-822c-f519f6a6076d"). InnerVolumeSpecName "kube-api-access-8g8ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.903845 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "806e176d-686f-4523-822c-f519f6a6076d" (UID: "806e176d-686f-4523-822c-f519f6a6076d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.907681 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "806e176d-686f-4523-822c-f519f6a6076d" (UID: "806e176d-686f-4523-822c-f519f6a6076d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.935260 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-config-data" (OuterVolumeSpecName: "config-data") pod "806e176d-686f-4523-822c-f519f6a6076d" (UID: "806e176d-686f-4523-822c-f519f6a6076d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.973791 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.973832 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g8ng\" (UniqueName: \"kubernetes.io/projected/806e176d-686f-4523-822c-f519f6a6076d-kube-api-access-8g8ng\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.973847 4752 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:09 crc kubenswrapper[4752]: I0122 10:46:09.973879 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806e176d-686f-4523-822c-f519f6a6076d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.469083 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerStarted","Data":"295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca"}
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.469473 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerStarted","Data":"d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194"}
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.469485 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerStarted","Data":"f40f624e0f4af6af98bf8d065097b48d98b656d19668f32c16d9066e4a3726b1"}
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.475551 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"806e176d-686f-4523-822c-f519f6a6076d","Type":"ContainerDied","Data":"a4778b5a517ac80cf1d0856c02a21949158ed3f84ceea47452a6724fe23639b9"}
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.475596 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.475601 4752 scope.go:117] "RemoveContainer" containerID="81d429a6da09dc2d168c69b6baf52b7cb15c263cdc9d8e028ff413c25c3fa9d8"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.477627 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjn85" event={"ID":"e16246f9-0755-4513-bd29-c487e9491528","Type":"ContainerStarted","Data":"2120b782fee42af709256ddb90e98e7d7ab1d962510b5987967446d307b05f81"}
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.514232 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jjn85" podStartSLOduration=2.62347272 podStartE2EDuration="12.514206515s" podCreationTimestamp="2026-01-22 10:45:58 +0000 UTC" firstStartedPulling="2026-01-22 10:45:59.740665769 +0000 UTC m=+1238.970608677" lastFinishedPulling="2026-01-22 10:46:09.631399564 +0000 UTC m=+1248.861342472" observedRunningTime="2026-01-22 10:46:10.503494351 +0000 UTC m=+1249.733437259" watchObservedRunningTime="2026-01-22 10:46:10.514206515 +0000 UTC m=+1249.744149423"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.531754 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.546689 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.560210 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:46:10 crc kubenswrapper[4752]: E0122 10:46:10.560706 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.560726 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: E0122 10:46:10.560751 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.560759 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.561043 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.561070 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.561079 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.561857 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.565501 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.569021 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.690607 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a85aed4e-0281-4d53-bccd-121bb379e016-logs\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.691050 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.691236 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65sd\" (UniqueName: \"kubernetes.io/projected/a85aed4e-0281-4d53-bccd-121bb379e016-kube-api-access-j65sd\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.691337 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.691385 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.794208 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.794300 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.794388 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a85aed4e-0281-4d53-bccd-121bb379e016-logs\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.794586 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.794714 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65sd\" (UniqueName: \"kubernetes.io/projected/a85aed4e-0281-4d53-bccd-121bb379e016-kube-api-access-j65sd\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.794893 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a85aed4e-0281-4d53-bccd-121bb379e016-logs\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.801772 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.802063 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.816960 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65sd\" (UniqueName: \"kubernetes.io/projected/a85aed4e-0281-4d53-bccd-121bb379e016-kube-api-access-j65sd\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.826397 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a85aed4e-0281-4d53-bccd-121bb379e016-config-data\") pod \"watcher-decision-engine-0\" (UID: \"a85aed4e-0281-4d53-bccd-121bb379e016\") " pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:10 crc kubenswrapper[4752]: I0122 10:46:10.892363 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:11 crc kubenswrapper[4752]: I0122 10:46:11.159831 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806e176d-686f-4523-822c-f519f6a6076d" path="/var/lib/kubelet/pods/806e176d-686f-4523-822c-f519f6a6076d/volumes"
Jan 22 10:46:11 crc kubenswrapper[4752]: I0122 10:46:11.457817 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 22 10:46:11 crc kubenswrapper[4752]: W0122 10:46:11.464676 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda85aed4e_0281_4d53_bccd_121bb379e016.slice/crio-1dcb3baaf60ed9a262e1807c6bd40294770ae6ba158fcc2e5014419a843ea41b WatchSource:0}: Error finding container 1dcb3baaf60ed9a262e1807c6bd40294770ae6ba158fcc2e5014419a843ea41b: Status 404 returned error can't find the container with id 1dcb3baaf60ed9a262e1807c6bd40294770ae6ba158fcc2e5014419a843ea41b
Jan 22 10:46:11 crc kubenswrapper[4752]: I0122 10:46:11.491074 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a85aed4e-0281-4d53-bccd-121bb379e016","Type":"ContainerStarted","Data":"1dcb3baaf60ed9a262e1807c6bd40294770ae6ba158fcc2e5014419a843ea41b"}
Jan 22 10:46:11 crc kubenswrapper[4752]: I0122 10:46:11.498361 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerStarted","Data":"0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f"}
Jan 22 10:46:15 crc kubenswrapper[4752]: I0122 10:46:15.545376 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"a85aed4e-0281-4d53-bccd-121bb379e016","Type":"ContainerStarted","Data":"581e04d0ad40df4bd2a6444aad6b6afabcfea993762475488f15994fc86587d0"}
Jan 22 10:46:15 crc kubenswrapper[4752]: I0122 10:46:15.568275 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=5.568256595 podStartE2EDuration="5.568256595s" podCreationTimestamp="2026-01-22 10:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:46:15.562950269 +0000 UTC m=+1254.792893177" watchObservedRunningTime="2026-01-22 10:46:15.568256595 +0000 UTC m=+1254.798199503"
Jan 22 10:46:16 crc kubenswrapper[4752]: I0122 10:46:16.556402 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerStarted","Data":"36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7"}
Jan 22 10:46:16 crc kubenswrapper[4752]: I0122 10:46:16.556645 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="sg-core" containerID="cri-o://0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f" gracePeriod=30
Jan 22 10:46:16 crc kubenswrapper[4752]: I0122 10:46:16.556696 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="proxy-httpd" containerID="cri-o://36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7" gracePeriod=30
Jan 22 10:46:16 crc kubenswrapper[4752]: I0122 10:46:16.556725 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-central-agent" containerID="cri-o://d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194" gracePeriod=30
Jan 22 10:46:16 crc kubenswrapper[4752]: I0122 10:46:16.556691 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-notification-agent" containerID="cri-o://295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca" gracePeriod=30
Jan 22 10:46:16 crc kubenswrapper[4752]: I0122 10:46:16.587170 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.945098045 podStartE2EDuration="13.587150729s" podCreationTimestamp="2026-01-22 10:46:03 +0000 UTC" firstStartedPulling="2026-01-22 10:46:09.866577135 +0000 UTC m=+1249.096520043" lastFinishedPulling="2026-01-22 10:46:15.508629819 +0000 UTC m=+1254.738572727" observedRunningTime="2026-01-22 10:46:16.579216246 +0000 UTC m=+1255.809159164" watchObservedRunningTime="2026-01-22 10:46:16.587150729 +0000 UTC m=+1255.817093637"
Jan 22 10:46:17 crc kubenswrapper[4752]: I0122 10:46:17.567182 4752 generic.go:334] "Generic (PLEG): container finished" podID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerID="36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7" exitCode=0
Jan 22 10:46:17 crc kubenswrapper[4752]: I0122 10:46:17.567518 4752 generic.go:334] "Generic (PLEG): container finished" podID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerID="0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f" exitCode=2
Jan 22 10:46:17 crc kubenswrapper[4752]: I0122 10:46:17.567529 4752 generic.go:334] "Generic (PLEG): container finished" podID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerID="295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca" exitCode=0
Jan 22 10:46:17 crc kubenswrapper[4752]: I0122 10:46:17.567239 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerDied","Data":"36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7"}
Jan 22 10:46:17 crc kubenswrapper[4752]: I0122 10:46:17.567560 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerDied","Data":"0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f"}
Jan 22 10:46:17 crc kubenswrapper[4752]: I0122 10:46:17.567571 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerDied","Data":"295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca"}
Jan 22 10:46:20 crc kubenswrapper[4752]: I0122 10:46:20.893579 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:20 crc kubenswrapper[4752]: I0122 10:46:20.921517 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:21 crc kubenswrapper[4752]: I0122 10:46:21.622744 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:21 crc kubenswrapper[4752]: I0122 10:46:21.652172 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.421053 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.522437 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-run-httpd\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.522620 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-log-httpd\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.522783 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-sg-core-conf-yaml\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.522911 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-config-data\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.522975 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-combined-ca-bundle\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.523064 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkbxb\" (UniqueName: \"kubernetes.io/projected/42c60f18-700d-4ad0-a572-d3ffd1d4efca-kube-api-access-mkbxb\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.523131 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-scripts\") pod \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\" (UID: \"42c60f18-700d-4ad0-a572-d3ffd1d4efca\") "
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.523323 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.523778 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.523991 4752 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.524026 4752 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c60f18-700d-4ad0-a572-d3ffd1d4efca-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.531225 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-scripts" (OuterVolumeSpecName: "scripts") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.531358 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c60f18-700d-4ad0-a572-d3ffd1d4efca-kube-api-access-mkbxb" (OuterVolumeSpecName: "kube-api-access-mkbxb") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "kube-api-access-mkbxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.577996 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.608018 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.625182 4752 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.625217 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.625227 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkbxb\" (UniqueName: \"kubernetes.io/projected/42c60f18-700d-4ad0-a572-d3ffd1d4efca-kube-api-access-mkbxb\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.625238 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.637095 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-config-data" (OuterVolumeSpecName: "config-data") pod "42c60f18-700d-4ad0-a572-d3ffd1d4efca" (UID: "42c60f18-700d-4ad0-a572-d3ffd1d4efca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.666955 4752 generic.go:334] "Generic (PLEG): container finished" podID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerID="d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194" exitCode=0
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.667019 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.667100 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerDied","Data":"d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194"}
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.667157 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c60f18-700d-4ad0-a572-d3ffd1d4efca","Type":"ContainerDied","Data":"f40f624e0f4af6af98bf8d065097b48d98b656d19668f32c16d9066e4a3726b1"}
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.667181 4752 scope.go:117] "RemoveContainer" containerID="36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.706405 4752 scope.go:117] "RemoveContainer" containerID="0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.728179 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c60f18-700d-4ad0-a572-d3ffd1d4efca-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.728232 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.733210 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.739160 4752 scope.go:117] "RemoveContainer" containerID="295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.756415 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.756885 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-notification-agent"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.756901 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-notification-agent"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.756918 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.756926 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.756937 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="sg-core"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.756946 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="sg-core"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.756976 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="proxy-httpd"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.756984 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="proxy-httpd"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.757006 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-central-agent"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757014 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-central-agent"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757193 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="proxy-httpd"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757211 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757224 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-central-agent"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757243 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="sg-core"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757253 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" containerName="ceilometer-notification-agent"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.757417 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.757425 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e176d-686f-4523-822c-f519f6a6076d" containerName="watcher-decision-engine"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.758902 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
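[Note] Each volume torn down in the entries above passes through three reconciler phases: an "operationExecutor.UnmountVolume started" line (reconciler_common.go:159), an "UnmountVolume.TearDown succeeded" line (operation_generator.go:803), and a "Volume detached" line (reconciler_common.go:293). The following is a minimal stdlib-Python sketch for pairing those phases per volume from a kubelet journal read on stdin; the regexes are illustrative approximations of the quoting visible in these entries, not anything kubelet itself ships:

    import re
    import sys

    # Illustrative patterns (not kubelet's own) for the three phases above.
    PHASES = [
        ("started",   re.compile(r'UnmountVolume started for volume .*?UniqueName: \\"([^"\\]+)')),
        ("torn-down", re.compile(r'UnmountVolume\.TearDown succeeded for volume "([^"]+)"')),
        ("detached",  re.compile(r'Volume detached for volume .*?UniqueName: \\"([^"\\]+)')),
    ]

    seen = {}  # UniqueName -> phases observed, in log order

    for line in sys.stdin:
        for phase, pattern in PHASES:
            match = pattern.search(line)
            if match:
                seen.setdefault(match.group(1), []).append(phase)

    for unique_name, phases in sorted(seen.items()):
        # A clean teardown reads: started -> torn-down -> detached.
        print(f"{unique_name}: {' -> '.join(phases)}")

Fed with output along the lines of journalctl -u kubelet, a volume that never reaches the detached phase is the one still holding up pod cleanup; in the entries above, every volume of pod 42c60f18-700d-4ad0-a572-d3ffd1d4efca reaches all three phases before its orphaned volumes dir is cleaned up at 10:46:25.111173.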
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.762557 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.762764 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.792003 4752 scope.go:117] "RemoveContainer" containerID="d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.794849 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.827117 4752 scope.go:117] "RemoveContainer" containerID="36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.832009 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7\": container with ID starting with 36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7 not found: ID does not exist" containerID="36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.832067 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7"} err="failed to get container status \"36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7\": rpc error: code = NotFound desc = could not find container \"36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7\": container with ID starting with 36eba2f76b80211c7c207325aa262a574ded78d42d3152a5e452ce3bfd04a4b7 not found: ID does not exist"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.832099 4752 scope.go:117] "RemoveContainer" containerID="0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834133 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834216 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jg5p\" (UniqueName: \"kubernetes.io/projected/02c4deb0-67e3-49a9-bc07-cb2568391afa-kube-api-access-5jg5p\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834270 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-config-data\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834292 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-scripts\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834320 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834350 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-run-httpd\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834380 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-log-httpd\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.834616 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f\": container with ID starting with 0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f not found: ID does not exist" containerID="0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834643 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f"} err="failed to get container status \"0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f\": rpc error: code = NotFound desc = could not find container \"0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f\": container with ID starting with 0725a4d59901244bdca7c724f4de75d1ad722e50f09ab98391716e1e29ad875f not found: ID does not exist"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.834669 4752 scope.go:117] "RemoveContainer" containerID="295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.835046 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca\": container with ID starting with 295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca not found: ID does not exist" containerID="295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.835068 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca"} err="failed to get container status \"295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca\": rpc error: code = NotFound desc = could not find container \"295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca\": container with ID starting with 295ca1ec9a93bb837f5c4c0d3f467c004511b8875462b794647a48f8d857d4ca not found: ID does not exist"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.835084 4752 scope.go:117] "RemoveContainer" containerID="d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194"
Jan 22 10:46:24 crc kubenswrapper[4752]: E0122 10:46:24.835318 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194\": container with ID starting with d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194 not found: ID does not exist" containerID="d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.835336 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194"} err="failed to get container status \"d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194\": rpc error: code = NotFound desc = could not find container \"d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194\": container with ID starting with d500f432a632710060d7da588ac2d60dc96d091d2f41dcfac967559364b94194 not found: ID does not exist"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.935665 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-run-httpd\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.935746 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-log-httpd\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.935843 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.935928 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jg5p\" (UniqueName: \"kubernetes.io/projected/02c4deb0-67e3-49a9-bc07-cb2568391afa-kube-api-access-5jg5p\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.935986 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-config-data\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.936010 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-scripts\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.936044 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.936473 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-run-httpd\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.936604 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-log-httpd\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.944181 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.944349 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-config-data\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.944404 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-scripts\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.945560 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:24 crc kubenswrapper[4752]: I0122 10:46:24.964870 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jg5p\" (UniqueName: \"kubernetes.io/projected/02c4deb0-67e3-49a9-bc07-cb2568391afa-kube-api-access-5jg5p\") pod \"ceilometer-0\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " pod="openstack/ceilometer-0"
Jan 22 10:46:25 crc kubenswrapper[4752]: I0122 10:46:25.077579 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 10:46:25 crc kubenswrapper[4752]: I0122 10:46:25.111173 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c60f18-700d-4ad0-a572-d3ffd1d4efca" path="/var/lib/kubelet/pods/42c60f18-700d-4ad0-a572-d3ffd1d4efca/volumes"
Jan 22 10:46:25 crc kubenswrapper[4752]: I0122 10:46:25.622755 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:25 crc kubenswrapper[4752]: W0122 10:46:25.623295 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c4deb0_67e3_49a9_bc07_cb2568391afa.slice/crio-ff2da6f4e133a81df3511530ac39dd8c2b5abfc0ec5d62399eeeea6861a50f82 WatchSource:0}: Error finding container ff2da6f4e133a81df3511530ac39dd8c2b5abfc0ec5d62399eeeea6861a50f82: Status 404 returned error can't find the container with id ff2da6f4e133a81df3511530ac39dd8c2b5abfc0ec5d62399eeeea6861a50f82
Jan 22 10:46:25 crc kubenswrapper[4752]: I0122 10:46:25.677326 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerStarted","Data":"ff2da6f4e133a81df3511530ac39dd8c2b5abfc0ec5d62399eeeea6861a50f82"}
Jan 22 10:46:26 crc kubenswrapper[4752]: I0122 10:46:26.688576 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerStarted","Data":"d9f8f5f242de15dfcba03067ad0754e4e62e7f6451ca4344eafe7095c319c859"}
Jan 22 10:46:27 crc kubenswrapper[4752]: I0122 10:46:27.706740 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerStarted","Data":"729b56b77f3fa7748aad94430602e76cbd7508a00d1856fc3204a9ee91f17ad9"}
Jan 22 10:46:27 crc kubenswrapper[4752]: I0122 10:46:27.723395 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 10:46:27 crc kubenswrapper[4752]: I0122 10:46:27.723446 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 10:46:27 crc kubenswrapper[4752]: I0122 10:46:27.723483 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 10:46:27 crc kubenswrapper[4752]: I0122 10:46:27.724323 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98fa078dac5ca30a46bf92bf45d8fc8b321a6f93f3d0f79aa40474301ba963e0"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 10:46:27 crc kubenswrapper[4752]: I0122 10:46:27.724377 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://98fa078dac5ca30a46bf92bf45d8fc8b321a6f93f3d0f79aa40474301ba963e0" gracePeriod=600
Jan 22 10:46:28 crc kubenswrapper[4752]: I0122 10:46:28.709378 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 10:46:28 crc kubenswrapper[4752]: I0122 10:46:28.719659 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="98fa078dac5ca30a46bf92bf45d8fc8b321a6f93f3d0f79aa40474301ba963e0" exitCode=0
Jan 22 10:46:28 crc kubenswrapper[4752]: I0122 10:46:28.719743 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"98fa078dac5ca30a46bf92bf45d8fc8b321a6f93f3d0f79aa40474301ba963e0"}
Jan 22 10:46:28 crc kubenswrapper[4752]: I0122 10:46:28.719793 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"6f52f6d18c41bfe8f36bddff272721d0bfb8924dfca328885232f7fbfd3ac21f"}
Jan 22 10:46:28 crc kubenswrapper[4752]: I0122 10:46:28.719812 4752 scope.go:117] "RemoveContainer" containerID="2e87e3a6ca557c47aa1a29b28c97952e66d28f228a2a925e37de3714e751682c"
Jan 22 10:46:28 crc kubenswrapper[4752]: I0122 10:46:28.722654 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerStarted","Data":"a4bcea95d1a12eb014d30be3968a75a505fd81e3788a70d4f95f85905f3d78c2"}
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.739421 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerStarted","Data":"21455171f6f49596118e55d84a094f7eb6fde1a0a7bd45cda9f12c402ca47489"}
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.740027 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.739767 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="sg-core" containerID="cri-o://a4bcea95d1a12eb014d30be3968a75a505fd81e3788a70d4f95f85905f3d78c2" gracePeriod=30
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.739757 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-central-agent" containerID="cri-o://d9f8f5f242de15dfcba03067ad0754e4e62e7f6451ca4344eafe7095c319c859" gracePeriod=30
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.739789 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="proxy-httpd" containerID="cri-o://21455171f6f49596118e55d84a094f7eb6fde1a0a7bd45cda9f12c402ca47489" gracePeriod=30
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.739803 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-notification-agent" containerID="cri-o://729b56b77f3fa7748aad94430602e76cbd7508a00d1856fc3204a9ee91f17ad9" gracePeriod=30
Jan 22 10:46:29 crc kubenswrapper[4752]: I0122 10:46:29.777233 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.399699073 podStartE2EDuration="5.777212311s" podCreationTimestamp="2026-01-22 10:46:24 +0000 UTC" firstStartedPulling="2026-01-22 10:46:25.626846297 +0000 UTC m=+1264.856789205" lastFinishedPulling="2026-01-22 10:46:29.004359535 +0000 UTC m=+1268.234302443" observedRunningTime="2026-01-22 10:46:29.761270893 +0000 UTC m=+1268.991213811" watchObservedRunningTime="2026-01-22 10:46:29.777212311 +0000 UTC m=+1269.007155219" Jan 22 10:46:30 crc kubenswrapper[4752]: I0122 10:46:30.764640 4752 generic.go:334] "Generic (PLEG): container finished" podID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerID="21455171f6f49596118e55d84a094f7eb6fde1a0a7bd45cda9f12c402ca47489" exitCode=0 Jan 22 10:46:30 crc kubenswrapper[4752]: I0122 10:46:30.765168 4752 generic.go:334] "Generic (PLEG): container finished" podID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerID="a4bcea95d1a12eb014d30be3968a75a505fd81e3788a70d4f95f85905f3d78c2" exitCode=2 Jan 22 10:46:30 crc kubenswrapper[4752]: I0122 10:46:30.765184 4752 generic.go:334] "Generic (PLEG): container finished" podID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerID="729b56b77f3fa7748aad94430602e76cbd7508a00d1856fc3204a9ee91f17ad9" exitCode=0 Jan 22 10:46:30 crc kubenswrapper[4752]: I0122 10:46:30.764688 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerDied","Data":"21455171f6f49596118e55d84a094f7eb6fde1a0a7bd45cda9f12c402ca47489"} Jan 22 10:46:30 crc kubenswrapper[4752]: I0122 10:46:30.765291 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerDied","Data":"a4bcea95d1a12eb014d30be3968a75a505fd81e3788a70d4f95f85905f3d78c2"} Jan 22 10:46:30 crc kubenswrapper[4752]: I0122 10:46:30.765326 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerDied","Data":"729b56b77f3fa7748aad94430602e76cbd7508a00d1856fc3204a9ee91f17ad9"} Jan 22 10:46:31 crc kubenswrapper[4752]: I0122 10:46:31.781449 4752 generic.go:334] "Generic (PLEG): container finished" podID="e16246f9-0755-4513-bd29-c487e9491528" containerID="2120b782fee42af709256ddb90e98e7d7ab1d962510b5987967446d307b05f81" exitCode=0 Jan 22 10:46:31 crc kubenswrapper[4752]: I0122 10:46:31.781667 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjn85" event={"ID":"e16246f9-0755-4513-bd29-c487e9491528","Type":"ContainerDied","Data":"2120b782fee42af709256ddb90e98e7d7ab1d962510b5987967446d307b05f81"} Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.223663 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.321083 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-scripts\") pod \"e16246f9-0755-4513-bd29-c487e9491528\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.321159 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-config-data\") pod \"e16246f9-0755-4513-bd29-c487e9491528\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.321427 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle\") pod \"e16246f9-0755-4513-bd29-c487e9491528\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.321458 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f24d\" (UniqueName: \"kubernetes.io/projected/e16246f9-0755-4513-bd29-c487e9491528-kube-api-access-2f24d\") pod \"e16246f9-0755-4513-bd29-c487e9491528\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.340406 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16246f9-0755-4513-bd29-c487e9491528-kube-api-access-2f24d" (OuterVolumeSpecName: "kube-api-access-2f24d") pod "e16246f9-0755-4513-bd29-c487e9491528" (UID: "e16246f9-0755-4513-bd29-c487e9491528"). InnerVolumeSpecName "kube-api-access-2f24d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.345150 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-scripts" (OuterVolumeSpecName: "scripts") pod "e16246f9-0755-4513-bd29-c487e9491528" (UID: "e16246f9-0755-4513-bd29-c487e9491528"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:33 crc kubenswrapper[4752]: E0122 10:46:33.349154 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle podName:e16246f9-0755-4513-bd29-c487e9491528 nodeName:}" failed. No retries permitted until 2026-01-22 10:46:33.849131946 +0000 UTC m=+1273.079074854 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle") pod "e16246f9-0755-4513-bd29-c487e9491528" (UID: "e16246f9-0755-4513-bd29-c487e9491528") : error deleting /var/lib/kubelet/pods/e16246f9-0755-4513-bd29-c487e9491528/volume-subpaths: remove /var/lib/kubelet/pods/e16246f9-0755-4513-bd29-c487e9491528/volume-subpaths: no such file or directory Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.351730 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-config-data" (OuterVolumeSpecName: "config-data") pod "e16246f9-0755-4513-bd29-c487e9491528" (UID: "e16246f9-0755-4513-bd29-c487e9491528"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.423467 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.423509 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.423518 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f24d\" (UniqueName: \"kubernetes.io/projected/e16246f9-0755-4513-bd29-c487e9491528-kube-api-access-2f24d\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.804880 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjn85" event={"ID":"e16246f9-0755-4513-bd29-c487e9491528","Type":"ContainerDied","Data":"6ddfb2a635ede3ecb91ac3ab0c330ae88e1b440efc3864ee985bb73dcf2656ac"} Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.805255 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ddfb2a635ede3ecb91ac3ab0c330ae88e1b440efc3864ee985bb73dcf2656ac" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.804954 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjn85" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.901199 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 10:46:33 crc kubenswrapper[4752]: E0122 10:46:33.901609 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16246f9-0755-4513-bd29-c487e9491528" containerName="nova-cell0-conductor-db-sync" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.901629 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16246f9-0755-4513-bd29-c487e9491528" containerName="nova-cell0-conductor-db-sync" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.901833 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16246f9-0755-4513-bd29-c487e9491528" containerName="nova-cell0-conductor-db-sync" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.902536 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.938183 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle\") pod \"e16246f9-0755-4513-bd29-c487e9491528\" (UID: \"e16246f9-0755-4513-bd29-c487e9491528\") " Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.939077 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 10:46:33 crc kubenswrapper[4752]: I0122 10:46:33.962083 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e16246f9-0755-4513-bd29-c487e9491528" (UID: "e16246f9-0755-4513-bd29-c487e9491528"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.040454 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b7e4ca-213e-4760-92c8-73696cefdd06-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.040509 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjr9t\" (UniqueName: \"kubernetes.io/projected/77b7e4ca-213e-4760-92c8-73696cefdd06-kube-api-access-gjr9t\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.040593 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b7e4ca-213e-4760-92c8-73696cefdd06-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.040800 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e16246f9-0755-4513-bd29-c487e9491528-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.142416 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b7e4ca-213e-4760-92c8-73696cefdd06-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.142767 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjr9t\" (UniqueName: \"kubernetes.io/projected/77b7e4ca-213e-4760-92c8-73696cefdd06-kube-api-access-gjr9t\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.142960 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b7e4ca-213e-4760-92c8-73696cefdd06-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.149313 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b7e4ca-213e-4760-92c8-73696cefdd06-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.149341 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b7e4ca-213e-4760-92c8-73696cefdd06-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.168780 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gjr9t\" (UniqueName: \"kubernetes.io/projected/77b7e4ca-213e-4760-92c8-73696cefdd06-kube-api-access-gjr9t\") pod \"nova-cell0-conductor-0\" (UID: \"77b7e4ca-213e-4760-92c8-73696cefdd06\") " pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.313400 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.807544 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.817101 4752 generic.go:334] "Generic (PLEG): container finished" podID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerID="d9f8f5f242de15dfcba03067ad0754e4e62e7f6451ca4344eafe7095c319c859" exitCode=0 Jan 22 10:46:34 crc kubenswrapper[4752]: I0122 10:46:34.817149 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerDied","Data":"d9f8f5f242de15dfcba03067ad0754e4e62e7f6451ca4344eafe7095c319c859"} Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.770849 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.830763 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02c4deb0-67e3-49a9-bc07-cb2568391afa","Type":"ContainerDied","Data":"ff2da6f4e133a81df3511530ac39dd8c2b5abfc0ec5d62399eeeea6861a50f82"} Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.830788 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.830824 4752 scope.go:117] "RemoveContainer" containerID="21455171f6f49596118e55d84a094f7eb6fde1a0a7bd45cda9f12c402ca47489" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.832625 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"77b7e4ca-213e-4760-92c8-73696cefdd06","Type":"ContainerStarted","Data":"32578427dccb6f6ea59649bfe32500a1945d588eec0f335ff4f81e0a4e08d3d5"} Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.832721 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"77b7e4ca-213e-4760-92c8-73696cefdd06","Type":"ContainerStarted","Data":"1a3ebdb23ce2f9fa6e2404dda3786fd17c86c9e20246a262b8ba9d80204ae563"} Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.832759 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.853223 4752 scope.go:117] "RemoveContainer" containerID="a4bcea95d1a12eb014d30be3968a75a505fd81e3788a70d4f95f85905f3d78c2" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.861750 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.861728321 podStartE2EDuration="2.861728321s" podCreationTimestamp="2026-01-22 10:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:46:35.854960268 +0000 UTC m=+1275.084903186" watchObservedRunningTime="2026-01-22 10:46:35.861728321 +0000 UTC m=+1275.091671229" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.875905 
4752 scope.go:117] "RemoveContainer" containerID="729b56b77f3fa7748aad94430602e76cbd7508a00d1856fc3204a9ee91f17ad9" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.887741 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-combined-ca-bundle\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.888022 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-scripts\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.888126 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-run-httpd\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.888247 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-config-data\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.888412 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jg5p\" (UniqueName: \"kubernetes.io/projected/02c4deb0-67e3-49a9-bc07-cb2568391afa-kube-api-access-5jg5p\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.888535 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-sg-core-conf-yaml\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.888651 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-log-httpd\") pod \"02c4deb0-67e3-49a9-bc07-cb2568391afa\" (UID: \"02c4deb0-67e3-49a9-bc07-cb2568391afa\") " Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.889894 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.890731 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.907108 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-scripts" (OuterVolumeSpecName: "scripts") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.907144 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c4deb0-67e3-49a9-bc07-cb2568391afa-kube-api-access-5jg5p" (OuterVolumeSpecName: "kube-api-access-5jg5p") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "kube-api-access-5jg5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.910342 4752 scope.go:117] "RemoveContainer" containerID="d9f8f5f242de15dfcba03067ad0754e4e62e7f6451ca4344eafe7095c319c859" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.925086 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.984117 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.990762 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jg5p\" (UniqueName: \"kubernetes.io/projected/02c4deb0-67e3-49a9-bc07-cb2568391afa-kube-api-access-5jg5p\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.990967 4752 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.991062 4752 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.991152 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.991224 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:35 crc kubenswrapper[4752]: I0122 10:46:35.991298 4752 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02c4deb0-67e3-49a9-bc07-cb2568391afa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.016185 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-config-data" (OuterVolumeSpecName: "config-data") pod "02c4deb0-67e3-49a9-bc07-cb2568391afa" (UID: "02c4deb0-67e3-49a9-bc07-cb2568391afa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.093114 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c4deb0-67e3-49a9-bc07-cb2568391afa-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.201617 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.214887 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.224703 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:46:36 crc kubenswrapper[4752]: E0122 10:46:36.239275 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="sg-core" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.239532 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="sg-core" Jan 22 10:46:36 crc kubenswrapper[4752]: E0122 10:46:36.239671 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="proxy-httpd" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.239747 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="proxy-httpd" Jan 22 10:46:36 crc kubenswrapper[4752]: E0122 10:46:36.239893 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-central-agent" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.239984 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-central-agent" Jan 22 10:46:36 crc kubenswrapper[4752]: E0122 10:46:36.240059 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-notification-agent" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.240134 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-notification-agent" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.240553 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="sg-core" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.240806 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-notification-agent" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.240916 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="ceilometer-central-agent" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.241014 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" containerName="proxy-httpd" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.242726 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.242934 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.254599 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.258618 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399455 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-config-data\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399519 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399589 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9cs\" (UniqueName: \"kubernetes.io/projected/680367ae-6a30-4fc4-8c40-f03746ef1288-kube-api-access-qn9cs\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399619 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399674 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-log-httpd\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399714 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-scripts\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.399767 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-run-httpd\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501150 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-config-data\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501537 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501603 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn9cs\" (UniqueName: \"kubernetes.io/projected/680367ae-6a30-4fc4-8c40-f03746ef1288-kube-api-access-qn9cs\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501628 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501651 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-log-httpd\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501684 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-scripts\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.501734 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-run-httpd\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.502425 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-run-httpd\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.503844 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-log-httpd\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.508767 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.509304 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-config-data\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.523837 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-scripts\") pod 
\"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.524688 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.527622 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn9cs\" (UniqueName: \"kubernetes.io/projected/680367ae-6a30-4fc4-8c40-f03746ef1288-kube-api-access-qn9cs\") pod \"ceilometer-0\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " pod="openstack/ceilometer-0" Jan 22 10:46:36 crc kubenswrapper[4752]: I0122 10:46:36.576669 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:46:37 crc kubenswrapper[4752]: I0122 10:46:37.113502 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c4deb0-67e3-49a9-bc07-cb2568391afa" path="/var/lib/kubelet/pods/02c4deb0-67e3-49a9-bc07-cb2568391afa/volumes" Jan 22 10:46:37 crc kubenswrapper[4752]: I0122 10:46:37.114666 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:46:37 crc kubenswrapper[4752]: I0122 10:46:37.860562 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerStarted","Data":"737865b0bfae2293b1783d9193f62edeff1b97a2a72d8fcf31d7aaf6ec103103"} Jan 22 10:46:38 crc kubenswrapper[4752]: I0122 10:46:38.870572 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerStarted","Data":"e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d"} Jan 22 10:46:39 crc kubenswrapper[4752]: I0122 10:46:39.370010 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 22 10:46:39 crc kubenswrapper[4752]: I0122 10:46:39.880730 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerStarted","Data":"153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae"} Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.446237 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-b9sfs"] Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.447810 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.452947 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.453164 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.481785 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9sfs"] Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.617147 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-config-data\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.617243 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-scripts\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.617451 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2vq\" (UniqueName: \"kubernetes.io/projected/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-kube-api-access-6n2vq\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.617490 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.718539 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2vq\" (UniqueName: \"kubernetes.io/projected/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-kube-api-access-6n2vq\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.718585 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.718633 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-config-data\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.718650 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-scripts\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.726000 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-config-data\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.726633 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-scripts\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.730521 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.734851 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2vq\" (UniqueName: \"kubernetes.io/projected/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-kube-api-access-6n2vq\") pod \"nova-cell0-cell-mapping-b9sfs\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") " pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:41 crc kubenswrapper[4752]: I0122 10:46:41.781693 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.103135 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.105359 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.107465 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.164529 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.226973 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.228587 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.231520 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.234240 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.234539 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlhc\" (UniqueName: \"kubernetes.io/projected/14e37871-b85a-4134-abd6-b8bfb0c6b696-kube-api-access-2wlhc\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.234594 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e37871-b85a-4134-abd6-b8bfb0c6b696-logs\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.234620 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-config-data\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.311975 4752 scope.go:117] "RemoveContainer" containerID="f6ca29422e8ead25b199f3c0854c1606f594038573c17af13f7a7994056512f3" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.331233 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.332628 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.339233 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.339602 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343125 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343395 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlhc\" (UniqueName: \"kubernetes.io/projected/14e37871-b85a-4134-abd6-b8bfb0c6b696-kube-api-access-2wlhc\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343432 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-config-data\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343484 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e37871-b85a-4134-abd6-b8bfb0c6b696-logs\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343508 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-config-data\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343665 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b5447-ce2e-4ef7-afb7-75f25e34d513-logs\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.343700 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785gj\" (UniqueName: \"kubernetes.io/projected/548b5447-ce2e-4ef7-afb7-75f25e34d513-kube-api-access-785gj\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.346158 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e37871-b85a-4134-abd6-b8bfb0c6b696-logs\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.361258 4752 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.361271 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-config-data\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.374983 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.379006 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.379574 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlhc\" (UniqueName: \"kubernetes.io/projected/14e37871-b85a-4134-abd6-b8bfb0c6b696-kube-api-access-2wlhc\") pod \"nova-api-0\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.386845 4752 scope.go:117] "RemoveContainer" containerID="ac384a087713918af368ea0354af014fc0475237fa108046304072b84e8ca6af" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.426918 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7995555d47-n9qlx"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.430405 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.435922 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7995555d47-n9qlx"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.445142 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-config-data\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.448193 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b5447-ce2e-4ef7-afb7-75f25e34d513-logs\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.448234 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-785gj\" (UniqueName: \"kubernetes.io/projected/548b5447-ce2e-4ef7-afb7-75f25e34d513-kube-api-access-785gj\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.448268 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlxx\" (UniqueName: \"kubernetes.io/projected/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-kube-api-access-ddlxx\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.448374 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.448397 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.448476 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-config-data\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.449855 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b5447-ce2e-4ef7-afb7-75f25e34d513-logs\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.451711 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-config-data\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.453138 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9sfs"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.453680 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.472358 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-785gj\" (UniqueName: \"kubernetes.io/projected/548b5447-ce2e-4ef7-afb7-75f25e34d513-kube-api-access-785gj\") pod \"nova-metadata-0\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.486405 4752 scope.go:117] "RemoveContainer" containerID="2fdd59b50f120087910514e7e280f9b550d1334935b3136960ece9806106fe98" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.507661 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.509172 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.511998 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.516333 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550765 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-config\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550814 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550848 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550920 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddlxx\" (UniqueName: \"kubernetes.io/projected/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-kube-api-access-ddlxx\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550949 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwl2j\" (UniqueName: \"kubernetes.io/projected/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-kube-api-access-hwl2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550972 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.550996 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-sb\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.551030 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " 
pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.551069 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-config-data\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.551087 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-swift-storage-0\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.551110 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-svc\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.551140 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rr9l\" (UniqueName: \"kubernetes.io/projected/71047b15-65b1-4b7d-ab73-effd16c9aa8a-kube-api-access-2rr9l\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.557649 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.558430 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-config-data\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.564402 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.572376 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddlxx\" (UniqueName: \"kubernetes.io/projected/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-kube-api-access-ddlxx\") pod \"nova-scheduler-0\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.594130 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653266 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-svc\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653334 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rr9l\" (UniqueName: \"kubernetes.io/projected/71047b15-65b1-4b7d-ab73-effd16c9aa8a-kube-api-access-2rr9l\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653393 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-config\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653426 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653465 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653559 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwl2j\" (UniqueName: \"kubernetes.io/projected/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-kube-api-access-hwl2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653584 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653608 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-sb\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.653652 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-swift-storage-0\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc 
kubenswrapper[4752]: I0122 10:46:42.659406 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.660234 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-swift-storage-0\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.661653 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-svc\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.662955 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-config\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.663534 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-nb\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.664372 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.665043 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-sb\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.675919 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rr9l\" (UniqueName: \"kubernetes.io/projected/71047b15-65b1-4b7d-ab73-effd16c9aa8a-kube-api-access-2rr9l\") pod \"dnsmasq-dns-7995555d47-n9qlx\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.685065 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.690291 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwl2j\" (UniqueName: \"kubernetes.io/projected/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-kube-api-access-hwl2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.754125 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.834935 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.900195 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-clql4"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.902034 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-clql4" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.905132 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.905617 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.915462 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-clql4"] Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.931969 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9sfs" event={"ID":"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe","Type":"ContainerStarted","Data":"b1256696a149ffb295c59cf8235f48341acf4a84955f640e0b22d0d352fc70fc"} Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.932007 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9sfs" event={"ID":"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe","Type":"ContainerStarted","Data":"4a7e1edf076281edb00bfc5d068bae510786b284060205c8ca8c514c11d3be25"} Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.941316 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerStarted","Data":"3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e"} Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.962367 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.962554 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxn2j\" (UniqueName: \"kubernetes.io/projected/93e19f28-ee06-4011-99f5-76be05faf55f-kube-api-access-wxn2j\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4" Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.962886 4752 
Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.962886 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-scripts\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:42 crc kubenswrapper[4752]: I0122 10:46:42.962998 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-config-data\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.023619 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-b9sfs" podStartSLOduration=2.023593694 podStartE2EDuration="2.023593694s" podCreationTimestamp="2026-01-22 10:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:46:42.975616405 +0000 UTC m=+1282.205559323" watchObservedRunningTime="2026-01-22 10:46:43.023593694 +0000 UTC m=+1282.253536602"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.067261 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-config-data\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.067389 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.067415 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxn2j\" (UniqueName: \"kubernetes.io/projected/93e19f28-ee06-4011-99f5-76be05faf55f-kube-api-access-wxn2j\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.067574 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-scripts\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.103634 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-config-data\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.115645 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxn2j\" (UniqueName: \"kubernetes.io/projected/93e19f28-ee06-4011-99f5-76be05faf55f-kube-api-access-wxn2j\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.117108 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-scripts\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.117562 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-clql4\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.407773 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-clql4"
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.592062 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.806128 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.960524 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548b5447-ce2e-4ef7-afb7-75f25e34d513","Type":"ContainerStarted","Data":"e1c0c1006c12203d2cb88e9292de7c7d2401a8c18b89d3e641f300f58997c91c"}
Jan 22 10:46:43 crc kubenswrapper[4752]: I0122 10:46:43.970112 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14e37871-b85a-4134-abd6-b8bfb0c6b696","Type":"ContainerStarted","Data":"5888fe19c2c235cdcd6c7c6a8842071a4a5ed2cbfa350a4486e82b2dcb409829"}
Jan 22 10:46:44 crc kubenswrapper[4752]: I0122 10:46:44.091602 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 22 10:46:44 crc kubenswrapper[4752]: W0122 10:46:44.128386 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71047b15_65b1_4b7d_ab73_effd16c9aa8a.slice/crio-655598428e7daabf635e443917da8191c0b3e5ccf4bcc22ef7afd5150584eca9 WatchSource:0}: Error finding container 655598428e7daabf635e443917da8191c0b3e5ccf4bcc22ef7afd5150584eca9: Status 404 returned error can't find the container with id 655598428e7daabf635e443917da8191c0b3e5ccf4bcc22ef7afd5150584eca9
Jan 22 10:46:44 crc kubenswrapper[4752]: I0122 10:46:44.131849 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7995555d47-n9qlx"]
Jan 22 10:46:44 crc kubenswrapper[4752]: I0122 10:46:44.315807 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 22 10:46:44 crc kubenswrapper[4752]: W0122 10:46:44.351855 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf48cd2f_5dee_42bc_a49e_35ae01dde6be.slice/crio-f1cc69ccfea2621ba5e8e3ded0176ce7551c0b66cb65245d3d737cb2895b53b9 WatchSource:0}: Error finding container f1cc69ccfea2621ba5e8e3ded0176ce7551c0b66cb65245d3d737cb2895b53b9: Status 404 returned error can't find the container with id f1cc69ccfea2621ba5e8e3ded0176ce7551c0b66cb65245d3d737cb2895b53b9
Jan 22 10:46:44 crc kubenswrapper[4752]: I0122 10:46:44.376617 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-clql4"]
Jan 22 10:46:44 crc kubenswrapper[4752]: I0122 10:46:44.999144 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-clql4" event={"ID":"93e19f28-ee06-4011-99f5-76be05faf55f","Type":"ContainerStarted","Data":"bbcc8a94edc9a1e6d721521a1e7e5ade57527721e1f2e72f99f26fc66ce68656"}
Jan 22 10:46:44 crc kubenswrapper[4752]: I0122 10:46:44.999513 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-clql4" event={"ID":"93e19f28-ee06-4011-99f5-76be05faf55f","Type":"ContainerStarted","Data":"6bb458393477c6303f62da84905a0c8940fc8fd20bb31a1eb615a1510ae3d7c0"}
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.009982 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c","Type":"ContainerStarted","Data":"8f538dcae8452e535925d2584dd73e4883444a1c34e1353c0cf400a87efb7f6f"}
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.020652 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-clql4" podStartSLOduration=3.020637671 podStartE2EDuration="3.020637671s" podCreationTimestamp="2026-01-22 10:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:46:45.01630984 +0000 UTC m=+1284.246252748" watchObservedRunningTime="2026-01-22 10:46:45.020637671 +0000 UTC m=+1284.250580579"
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.064215 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerStarted","Data":"39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5"}
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.065114 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.068848 4752 generic.go:334] "Generic (PLEG): container finished" podID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerID="db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273" exitCode=0
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.068939 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" event={"ID":"71047b15-65b1-4b7d-ab73-effd16c9aa8a","Type":"ContainerDied","Data":"db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273"}
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.068972 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" event={"ID":"71047b15-65b1-4b7d-ab73-effd16c9aa8a","Type":"ContainerStarted","Data":"655598428e7daabf635e443917da8191c0b3e5ccf4bcc22ef7afd5150584eca9"}
Jan 22 10:46:45 crc kubenswrapper[4752]: I0122 10:46:45.070051 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bf48cd2f-5dee-42bc-a49e-35ae01dde6be","Type":"ContainerStarted","Data":"f1cc69ccfea2621ba5e8e3ded0176ce7551c0b66cb65245d3d737cb2895b53b9"}
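
The pod_startup_latency_tracker entries make their arithmetic visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Zero-value "0001-01-01" pull timestamps, as in the nova-cell0-cell-mapping-b9sfs entry above, mean no image was pulled, so the two durations coincide. A small Go check using the values from the ceilometer-0 entry that follows; the parse layout matches the "+0000 UTC" stamps in these lines.

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // Go accepts the fractional seconds on parse

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-22 10:46:36 +0000 UTC")
	firstPull := mustParse("2026-01-22 10:46:37.107775892 +0000 UTC")
	lastPull := mustParse("2026-01-22 10:46:43.688354112 +0000 UTC")
	observed := mustParse("2026-01-22 10:46:45.124950011 +0000 UTC")

	e2e := observed.Sub(created)              // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull)      // podStartSLOduration excludes pull time
	fmt.Println(e2e, slo)                     // 9.124950011s 2.544371791s, matching the entry below
}
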
pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.544371791 podStartE2EDuration="9.124950011s" podCreationTimestamp="2026-01-22 10:46:36 +0000 UTC" firstStartedPulling="2026-01-22 10:46:37.107775892 +0000 UTC m=+1276.337718800" lastFinishedPulling="2026-01-22 10:46:43.688354112 +0000 UTC m=+1282.918297020" observedRunningTime="2026-01-22 10:46:45.091878085 +0000 UTC m=+1284.321821003" watchObservedRunningTime="2026-01-22 10:46:45.124950011 +0000 UTC m=+1284.354892919" Jan 22 10:46:46 crc kubenswrapper[4752]: I0122 10:46:46.090295 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" event={"ID":"71047b15-65b1-4b7d-ab73-effd16c9aa8a","Type":"ContainerStarted","Data":"180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e"} Jan 22 10:46:46 crc kubenswrapper[4752]: I0122 10:46:46.091097 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:46 crc kubenswrapper[4752]: I0122 10:46:46.118968 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" podStartSLOduration=4.118935458 podStartE2EDuration="4.118935458s" podCreationTimestamp="2026-01-22 10:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:46:46.112282018 +0000 UTC m=+1285.342224926" watchObservedRunningTime="2026-01-22 10:46:46.118935458 +0000 UTC m=+1285.348878376" Jan 22 10:46:46 crc kubenswrapper[4752]: I0122 10:46:46.505847 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:46:46 crc kubenswrapper[4752]: I0122 10:46:46.518827 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.139528 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bf48cd2f-5dee-42bc-a49e-35ae01dde6be","Type":"ContainerStarted","Data":"19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a"} Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.142233 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c","Type":"ContainerStarted","Data":"d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30"} Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.142342 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30" gracePeriod=30 Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.152721 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14e37871-b85a-4134-abd6-b8bfb0c6b696","Type":"ContainerStarted","Data":"72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122"} Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.152817 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14e37871-b85a-4134-abd6-b8bfb0c6b696","Type":"ContainerStarted","Data":"2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c"} Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.155276 4752 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"548b5447-ce2e-4ef7-afb7-75f25e34d513","Type":"ContainerStarted","Data":"d638eaf073d3fe879712af4105b21a53062c742929ef058ea15085891c57237b"} Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.155320 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548b5447-ce2e-4ef7-afb7-75f25e34d513","Type":"ContainerStarted","Data":"e4d997e217a9d45bbd8c068033a9a1c56c235079fa94030f97a7c270219b37ab"} Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.155450 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-metadata" containerID="cri-o://d638eaf073d3fe879712af4105b21a53062c742929ef058ea15085891c57237b" gracePeriod=30 Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.155447 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-log" containerID="cri-o://e4d997e217a9d45bbd8c068033a9a1c56c235079fa94030f97a7c270219b37ab" gracePeriod=30 Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.175703 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.4447900000000002 podStartE2EDuration="8.175686736s" podCreationTimestamp="2026-01-22 10:46:42 +0000 UTC" firstStartedPulling="2026-01-22 10:46:44.36437421 +0000 UTC m=+1283.594317118" lastFinishedPulling="2026-01-22 10:46:49.095270946 +0000 UTC m=+1288.325213854" observedRunningTime="2026-01-22 10:46:50.166933842 +0000 UTC m=+1289.396876750" watchObservedRunningTime="2026-01-22 10:46:50.175686736 +0000 UTC m=+1289.405629644" Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.200681 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.218015464 podStartE2EDuration="8.200629404s" podCreationTimestamp="2026-01-22 10:46:42 +0000 UTC" firstStartedPulling="2026-01-22 10:46:44.112370338 +0000 UTC m=+1283.342313246" lastFinishedPulling="2026-01-22 10:46:49.094984258 +0000 UTC m=+1288.324927186" observedRunningTime="2026-01-22 10:46:50.193095812 +0000 UTC m=+1289.423038720" watchObservedRunningTime="2026-01-22 10:46:50.200629404 +0000 UTC m=+1289.430572312" Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.226711 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.032429462 podStartE2EDuration="8.226681581s" podCreationTimestamp="2026-01-22 10:46:42 +0000 UTC" firstStartedPulling="2026-01-22 10:46:43.900645017 +0000 UTC m=+1283.130587925" lastFinishedPulling="2026-01-22 10:46:49.094897136 +0000 UTC m=+1288.324840044" observedRunningTime="2026-01-22 10:46:50.212925669 +0000 UTC m=+1289.442868597" watchObservedRunningTime="2026-01-22 10:46:50.226681581 +0000 UTC m=+1289.456624489" Jan 22 10:46:50 crc kubenswrapper[4752]: I0122 10:46:50.250110 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.84563098 podStartE2EDuration="8.25007676s" podCreationTimestamp="2026-01-22 10:46:42 +0000 UTC" firstStartedPulling="2026-01-22 10:46:43.694787057 +0000 UTC m=+1282.924729965" lastFinishedPulling="2026-01-22 10:46:49.099232827 +0000 UTC m=+1288.329175745" observedRunningTime="2026-01-22 
10:46:50.238794431 +0000 UTC m=+1289.468737339" watchObservedRunningTime="2026-01-22 10:46:50.25007676 +0000 UTC m=+1289.480019668" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.171138 4752 generic.go:334] "Generic (PLEG): container finished" podID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerID="d638eaf073d3fe879712af4105b21a53062c742929ef058ea15085891c57237b" exitCode=0 Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.171429 4752 generic.go:334] "Generic (PLEG): container finished" podID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerID="e4d997e217a9d45bbd8c068033a9a1c56c235079fa94030f97a7c270219b37ab" exitCode=143 Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.171522 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548b5447-ce2e-4ef7-afb7-75f25e34d513","Type":"ContainerDied","Data":"d638eaf073d3fe879712af4105b21a53062c742929ef058ea15085891c57237b"} Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.171568 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548b5447-ce2e-4ef7-afb7-75f25e34d513","Type":"ContainerDied","Data":"e4d997e217a9d45bbd8c068033a9a1c56c235079fa94030f97a7c270219b37ab"} Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.282988 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.405826 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b5447-ce2e-4ef7-afb7-75f25e34d513-logs\") pod \"548b5447-ce2e-4ef7-afb7-75f25e34d513\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.405952 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-785gj\" (UniqueName: \"kubernetes.io/projected/548b5447-ce2e-4ef7-afb7-75f25e34d513-kube-api-access-785gj\") pod \"548b5447-ce2e-4ef7-afb7-75f25e34d513\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.406024 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-config-data\") pod \"548b5447-ce2e-4ef7-afb7-75f25e34d513\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.406117 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548b5447-ce2e-4ef7-afb7-75f25e34d513-logs" (OuterVolumeSpecName: "logs") pod "548b5447-ce2e-4ef7-afb7-75f25e34d513" (UID: "548b5447-ce2e-4ef7-afb7-75f25e34d513"). InnerVolumeSpecName "logs". 
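
The two exit codes just above encode how each nova-metadata container answered the gracePeriod=30 kill: exitCode=0 is a clean shutdown, while exitCode=143 is 128+15, meaning the process was still running when the runtime's SIGTERM landed and died from the signal. An illustrative Go handler (not OpenStack or kubelet code) that turns SIGTERM into an orderly exit 0 inside the grace period:

package main

import (
	"context"
	"fmt"
	"os/signal"
	"syscall"
)

func main() {
	// Cancelled when the kubelet's kill delivers SIGTERM to PID 1.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	<-ctx.Done()
	fmt.Println("draining and exiting before the grace period lapses")
	// ... flush logs, close listeners, then return (exit code 0, not 143) ...
}
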
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.406214 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-combined-ca-bundle\") pod \"548b5447-ce2e-4ef7-afb7-75f25e34d513\" (UID: \"548b5447-ce2e-4ef7-afb7-75f25e34d513\") " Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.406617 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/548b5447-ce2e-4ef7-afb7-75f25e34d513-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.413335 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548b5447-ce2e-4ef7-afb7-75f25e34d513-kube-api-access-785gj" (OuterVolumeSpecName: "kube-api-access-785gj") pod "548b5447-ce2e-4ef7-afb7-75f25e34d513" (UID: "548b5447-ce2e-4ef7-afb7-75f25e34d513"). InnerVolumeSpecName "kube-api-access-785gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.439761 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-config-data" (OuterVolumeSpecName: "config-data") pod "548b5447-ce2e-4ef7-afb7-75f25e34d513" (UID: "548b5447-ce2e-4ef7-afb7-75f25e34d513"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.441205 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "548b5447-ce2e-4ef7-afb7-75f25e34d513" (UID: "548b5447-ce2e-4ef7-afb7-75f25e34d513"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.511472 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.512533 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548b5447-ce2e-4ef7-afb7-75f25e34d513-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:51 crc kubenswrapper[4752]: I0122 10:46:51.512561 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-785gj\" (UniqueName: \"kubernetes.io/projected/548b5447-ce2e-4ef7-afb7-75f25e34d513-kube-api-access-785gj\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.185073 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"548b5447-ce2e-4ef7-afb7-75f25e34d513","Type":"ContainerDied","Data":"e1c0c1006c12203d2cb88e9292de7c7d2401a8c18b89d3e641f300f58997c91c"} Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.185122 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.185137 4752 scope.go:117] "RemoveContainer" containerID="d638eaf073d3fe879712af4105b21a53062c742929ef058ea15085891c57237b" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.230119 4752 scope.go:117] "RemoveContainer" containerID="e4d997e217a9d45bbd8c068033a9a1c56c235079fa94030f97a7c270219b37ab" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.243325 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.268934 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.285694 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:52 crc kubenswrapper[4752]: E0122 10:46:52.286104 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-log" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.286126 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-log" Jan 22 10:46:52 crc kubenswrapper[4752]: E0122 10:46:52.286158 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-metadata" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.286166 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-metadata" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.286375 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-log" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.286410 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" containerName="nova-metadata-metadata" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.287442 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.289810 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.291608 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.294311 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.437009 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-config-data\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.437242 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db375bb-394f-4c80-93c3-9145d730ddd8-logs\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.437407 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.437454 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r822j\" (UniqueName: \"kubernetes.io/projected/3db375bb-394f-4c80-93c3-9145d730ddd8-kube-api-access-r822j\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.437598 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.539459 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.539527 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r822j\" (UniqueName: \"kubernetes.io/projected/3db375bb-394f-4c80-93c3-9145d730ddd8-kube-api-access-r822j\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.539680 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.539749 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-config-data\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.539913 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db375bb-394f-4c80-93c3-9145d730ddd8-logs\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.540734 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db375bb-394f-4c80-93c3-9145d730ddd8-logs\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.544467 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-config-data\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.544508 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.545153 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.559751 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r822j\" (UniqueName: \"kubernetes.io/projected/3db375bb-394f-4c80-93c3-9145d730ddd8-kube-api-access-r822j\") pod \"nova-metadata-0\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.565746 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.565898 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.613643 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.686190 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.686270 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.725237 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.763132 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.835550 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.869120 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85c94b455f-t6lr7"] Jan 22 10:46:52 crc kubenswrapper[4752]: I0122 10:46:52.869454 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerName="dnsmasq-dns" containerID="cri-o://0e70974e53dad844fb03bc4851b13ebae7c5106575d98127898a815f2adf80d8" gracePeriod=10 Jan 22 10:46:53 crc kubenswrapper[4752]: I0122 10:46:53.114601 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548b5447-ce2e-4ef7-afb7-75f25e34d513" path="/var/lib/kubelet/pods/548b5447-ce2e-4ef7-afb7-75f25e34d513/volumes" Jan 22 10:46:53 crc kubenswrapper[4752]: I0122 10:46:53.173115 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:53 crc kubenswrapper[4752]: W0122 10:46:53.186036 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3db375bb_394f_4c80_93c3_9145d730ddd8.slice/crio-3a28bc864fa2ca931bf338624decc3b6b8b6de0fccf4cc969283b6f471d91b09 WatchSource:0}: Error finding container 3a28bc864fa2ca931bf338624decc3b6b8b6de0fccf4cc969283b6f471d91b09: Status 404 returned error can't find the container with id 3a28bc864fa2ca931bf338624decc3b6b8b6de0fccf4cc969283b6f471d91b09 Jan 22 10:46:53 crc kubenswrapper[4752]: I0122 10:46:53.234417 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 10:46:53 crc kubenswrapper[4752]: I0122 10:46:53.649347 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:46:53 crc kubenswrapper[4752]: I0122 10:46:53.649433 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:46:54 crc kubenswrapper[4752]: I0122 10:46:54.205552 4752 generic.go:334] "Generic (PLEG): container finished" podID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerID="0e70974e53dad844fb03bc4851b13ebae7c5106575d98127898a815f2adf80d8" exitCode=0 Jan 22 10:46:54 crc 
kubenswrapper[4752]: I0122 10:46:54.205652 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" event={"ID":"ba33f293-195a-44c7-9b7f-6f57716c4fa8","Type":"ContainerDied","Data":"0e70974e53dad844fb03bc4851b13ebae7c5106575d98127898a815f2adf80d8"} Jan 22 10:46:54 crc kubenswrapper[4752]: I0122 10:46:54.207234 4752 generic.go:334] "Generic (PLEG): container finished" podID="b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" containerID="b1256696a149ffb295c59cf8235f48341acf4a84955f640e0b22d0d352fc70fc" exitCode=0 Jan 22 10:46:54 crc kubenswrapper[4752]: I0122 10:46:54.207303 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9sfs" event={"ID":"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe","Type":"ContainerDied","Data":"b1256696a149ffb295c59cf8235f48341acf4a84955f640e0b22d0d352fc70fc"} Jan 22 10:46:54 crc kubenswrapper[4752]: I0122 10:46:54.209732 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3db375bb-394f-4c80-93c3-9145d730ddd8","Type":"ContainerStarted","Data":"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c"} Jan 22 10:46:54 crc kubenswrapper[4752]: I0122 10:46:54.209767 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3db375bb-394f-4c80-93c3-9145d730ddd8","Type":"ContainerStarted","Data":"3a28bc864fa2ca931bf338624decc3b6b8b6de0fccf4cc969283b6f471d91b09"} Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.224519 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.228195 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" event={"ID":"ba33f293-195a-44c7-9b7f-6f57716c4fa8","Type":"ContainerDied","Data":"da90a9cf82f5e631582e3611a39d7325d87e90862f009c2c825efd6fd32d2374"} Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.228235 4752 scope.go:117] "RemoveContainer" containerID="0e70974e53dad844fb03bc4851b13ebae7c5106575d98127898a815f2adf80d8" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.239887 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3db375bb-394f-4c80-93c3-9145d730ddd8","Type":"ContainerStarted","Data":"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479"} Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.277408 4752 scope.go:117] "RemoveContainer" containerID="3c54cda7d53f54fddf651d1a3dc0df08148d9c7a45677d43676fbf24127cafe7" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.283125 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.283108422 podStartE2EDuration="3.283108422s" podCreationTimestamp="2026-01-22 10:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:46:55.276983845 +0000 UTC m=+1294.506926763" watchObservedRunningTime="2026-01-22 10:46:55.283108422 +0000 UTC m=+1294.513051330" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.312175 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-svc\") pod \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") " Jan 22 10:46:55 crc 
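
The "Probe failed" entries above are the startup probes for nova-api-0 timing out against http://10.217.0.210:8774/, and the quoted error string is the standard Go net/http client timeout. The same check in plain Go follows; the one-second deadline mirrors the kubelet's default probe timeoutSeconds of 1, which is an assumption here since the pod spec is not in the log.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe URL taken from the log output above; timeout is assumed.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.217.0.210:8774/")
	if err != nil {
		// Prints e.g.: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe ok:", resp.Status)
}
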
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.312299 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-config\") pod \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.312381 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-sb\") pod \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.312472 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-swift-storage-0\") pod \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.312589 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-nb\") pod \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.312675 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hbm8\" (UniqueName: \"kubernetes.io/projected/ba33f293-195a-44c7-9b7f-6f57716c4fa8-kube-api-access-9hbm8\") pod \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\" (UID: \"ba33f293-195a-44c7-9b7f-6f57716c4fa8\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.346748 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba33f293-195a-44c7-9b7f-6f57716c4fa8-kube-api-access-9hbm8" (OuterVolumeSpecName: "kube-api-access-9hbm8") pod "ba33f293-195a-44c7-9b7f-6f57716c4fa8" (UID: "ba33f293-195a-44c7-9b7f-6f57716c4fa8"). InnerVolumeSpecName "kube-api-access-9hbm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.394984 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba33f293-195a-44c7-9b7f-6f57716c4fa8" (UID: "ba33f293-195a-44c7-9b7f-6f57716c4fa8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.399292 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba33f293-195a-44c7-9b7f-6f57716c4fa8" (UID: "ba33f293-195a-44c7-9b7f-6f57716c4fa8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.407227 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba33f293-195a-44c7-9b7f-6f57716c4fa8" (UID: "ba33f293-195a-44c7-9b7f-6f57716c4fa8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.410514 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba33f293-195a-44c7-9b7f-6f57716c4fa8" (UID: "ba33f293-195a-44c7-9b7f-6f57716c4fa8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.415915 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.417840 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.419279 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hbm8\" (UniqueName: \"kubernetes.io/projected/ba33f293-195a-44c7-9b7f-6f57716c4fa8-kube-api-access-9hbm8\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.419402 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.419491 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.419242 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-config" (OuterVolumeSpecName: "config") pod "ba33f293-195a-44c7-9b7f-6f57716c4fa8" (UID: "ba33f293-195a-44c7-9b7f-6f57716c4fa8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.522601 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba33f293-195a-44c7-9b7f-6f57716c4fa8-config\") on node \"crc\" DevicePath \"\""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.586755 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9sfs"
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.725568 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-config-data\") pod \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.725782 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n2vq\" (UniqueName: \"kubernetes.io/projected/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-kube-api-access-6n2vq\") pod \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.725914 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-scripts\") pod \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.725980 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-combined-ca-bundle\") pod \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\" (UID: \"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe\") "
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.735384 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-scripts" (OuterVolumeSpecName: "scripts") pod "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" (UID: "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.739099 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-kube-api-access-6n2vq" (OuterVolumeSpecName: "kube-api-access-6n2vq") pod "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" (UID: "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe"). InnerVolumeSpecName "kube-api-access-6n2vq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.754666 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" (UID: "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.755096 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-config-data" (OuterVolumeSpecName: "config-data") pod "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" (UID: "b3d61b3d-9898-4e80-9ab1-9693bb60fcbe"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.828232 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n2vq\" (UniqueName: \"kubernetes.io/projected/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-kube-api-access-6n2vq\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.828528 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.828545 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:55 crc kubenswrapper[4752]: I0122 10:46:55.828554 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.260637 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85c94b455f-t6lr7" Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.263410 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-b9sfs" event={"ID":"b3d61b3d-9898-4e80-9ab1-9693bb60fcbe","Type":"ContainerDied","Data":"4a7e1edf076281edb00bfc5d068bae510786b284060205c8ca8c514c11d3be25"} Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.263466 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-b9sfs" Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.263487 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7e1edf076281edb00bfc5d068bae510786b284060205c8ca8c514c11d3be25" Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.308970 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85c94b455f-t6lr7"] Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.323184 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85c94b455f-t6lr7"] Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.446279 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.446633 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-log" containerID="cri-o://2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c" gracePeriod=30 Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.447203 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-api" containerID="cri-o://72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122" gracePeriod=30 Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.458296 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.458539 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" containerName="nova-scheduler-scheduler" containerID="cri-o://19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" gracePeriod=30 Jan 22 10:46:56 crc kubenswrapper[4752]: I0122 10:46:56.501800 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.109792 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" path="/var/lib/kubelet/pods/ba33f293-195a-44c7-9b7f-6f57716c4fa8/volumes" Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.277303 4752 generic.go:334] "Generic (PLEG): container finished" podID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerID="2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c" exitCode=143 Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.277387 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14e37871-b85a-4134-abd6-b8bfb0c6b696","Type":"ContainerDied","Data":"2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c"} Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.277605 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-log" containerID="cri-o://90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c" gracePeriod=30 Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.277710 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-metadata" containerID="cri-o://e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479" gracePeriod=30 Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.619768 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.619846 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:46:57 crc kubenswrapper[4752]: E0122 10:46:57.694669 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a is running failed: container process not found" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:46:57 crc kubenswrapper[4752]: E0122 10:46:57.696133 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a is running failed: container process not found" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:46:57 crc kubenswrapper[4752]: E0122 10:46:57.696719 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a is running failed: container process not found" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:46:57 crc kubenswrapper[4752]: E0122 10:46:57.696786 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" containerName="nova-scheduler-scheduler" Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.927285 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.938947 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.977931 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-combined-ca-bundle\") pod \"3db375bb-394f-4c80-93c3-9145d730ddd8\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.977986 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r822j\" (UniqueName: \"kubernetes.io/projected/3db375bb-394f-4c80-93c3-9145d730ddd8-kube-api-access-r822j\") pod \"3db375bb-394f-4c80-93c3-9145d730ddd8\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.978105 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-nova-metadata-tls-certs\") pod \"3db375bb-394f-4c80-93c3-9145d730ddd8\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.978169 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-config-data\") pod \"3db375bb-394f-4c80-93c3-9145d730ddd8\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.978474 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db375bb-394f-4c80-93c3-9145d730ddd8-logs\") pod \"3db375bb-394f-4c80-93c3-9145d730ddd8\" (UID: \"3db375bb-394f-4c80-93c3-9145d730ddd8\") " Jan 22 10:46:57 crc kubenswrapper[4752]: I0122 10:46:57.979376 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db375bb-394f-4c80-93c3-9145d730ddd8-logs" (OuterVolumeSpecName: "logs") pod "3db375bb-394f-4c80-93c3-9145d730ddd8" (UID: "3db375bb-394f-4c80-93c3-9145d730ddd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.000761 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db375bb-394f-4c80-93c3-9145d730ddd8-kube-api-access-r822j" (OuterVolumeSpecName: "kube-api-access-r822j") pod "3db375bb-394f-4c80-93c3-9145d730ddd8" (UID: "3db375bb-394f-4c80-93c3-9145d730ddd8"). InnerVolumeSpecName "kube-api-access-r822j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.020498 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-config-data" (OuterVolumeSpecName: "config-data") pod "3db375bb-394f-4c80-93c3-9145d730ddd8" (UID: "3db375bb-394f-4c80-93c3-9145d730ddd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.024130 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3db375bb-394f-4c80-93c3-9145d730ddd8" (UID: "3db375bb-394f-4c80-93c3-9145d730ddd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.056541 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3db375bb-394f-4c80-93c3-9145d730ddd8" (UID: "3db375bb-394f-4c80-93c3-9145d730ddd8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.080385 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlxx\" (UniqueName: \"kubernetes.io/projected/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-kube-api-access-ddlxx\") pod \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.080846 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-combined-ca-bundle\") pod \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.081071 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-config-data\") pod \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\" (UID: \"bf48cd2f-5dee-42bc-a49e-35ae01dde6be\") " Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.081751 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.081848 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r822j\" (UniqueName: \"kubernetes.io/projected/3db375bb-394f-4c80-93c3-9145d730ddd8-kube-api-access-r822j\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.081972 4752 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.082096 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db375bb-394f-4c80-93c3-9145d730ddd8-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.082219 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db375bb-394f-4c80-93c3-9145d730ddd8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.085609 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-kube-api-access-ddlxx" (OuterVolumeSpecName: "kube-api-access-ddlxx") pod "bf48cd2f-5dee-42bc-a49e-35ae01dde6be" (UID: "bf48cd2f-5dee-42bc-a49e-35ae01dde6be"). InnerVolumeSpecName "kube-api-access-ddlxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.118509 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf48cd2f-5dee-42bc-a49e-35ae01dde6be" (UID: "bf48cd2f-5dee-42bc-a49e-35ae01dde6be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.123351 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-config-data" (OuterVolumeSpecName: "config-data") pod "bf48cd2f-5dee-42bc-a49e-35ae01dde6be" (UID: "bf48cd2f-5dee-42bc-a49e-35ae01dde6be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.183852 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddlxx\" (UniqueName: \"kubernetes.io/projected/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-kube-api-access-ddlxx\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.183908 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.183919 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf48cd2f-5dee-42bc-a49e-35ae01dde6be-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.303517 4752 generic.go:334] "Generic (PLEG): container finished" podID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" exitCode=0 Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.303755 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bf48cd2f-5dee-42bc-a49e-35ae01dde6be","Type":"ContainerDied","Data":"19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a"} Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.303790 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bf48cd2f-5dee-42bc-a49e-35ae01dde6be","Type":"ContainerDied","Data":"f1cc69ccfea2621ba5e8e3ded0176ce7551c0b66cb65245d3d737cb2895b53b9"} Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.303835 4752 scope.go:117] "RemoveContainer" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.304089 4752 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.307337 4752 generic.go:334] "Generic (PLEG): container finished" podID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerID="e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479" exitCode=0 Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.307369 4752 generic.go:334] "Generic (PLEG): container finished" podID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerID="90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c" exitCode=143 Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.307389 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3db375bb-394f-4c80-93c3-9145d730ddd8","Type":"ContainerDied","Data":"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479"} Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.307412 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3db375bb-394f-4c80-93c3-9145d730ddd8","Type":"ContainerDied","Data":"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c"} Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.307425 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3db375bb-394f-4c80-93c3-9145d730ddd8","Type":"ContainerDied","Data":"3a28bc864fa2ca931bf338624decc3b6b8b6de0fccf4cc969283b6f471d91b09"} Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.307423 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.343303 4752 scope.go:117] "RemoveContainer" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.344021 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a\": container with ID starting with 19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a not found: ID does not exist" containerID="19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.344049 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a"} err="failed to get container status \"19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a\": rpc error: code = NotFound desc = could not find container \"19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a\": container with ID starting with 19189a3a9e98e2a62e272fc878bd6bae0dda0a910afa894e21608b7788221b6a not found: ID does not exist" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.344070 4752 scope.go:117] "RemoveContainer" containerID="e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.360683 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.382381 4752 scope.go:117] "RemoveContainer" containerID="90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.392517 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.421921 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.454747 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.469269 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.473559 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-log" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.473601 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-log" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.473613 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerName="init" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.473620 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerName="init" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.473630 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" containerName="nova-scheduler-scheduler" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.473635 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" containerName="nova-scheduler-scheduler" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.473649 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-metadata" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.473655 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-metadata" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.475995 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" containerName="nova-manage" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.476031 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" containerName="nova-manage" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.476059 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerName="dnsmasq-dns" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.476068 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerName="dnsmasq-dns" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.481416 4752 scope.go:117] "RemoveContainer" containerID="e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.482063 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" containerName="nova-manage" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.482113 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-metadata" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.482137 4752 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" containerName="nova-scheduler-scheduler" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.482168 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba33f293-195a-44c7-9b7f-6f57716c4fa8" containerName="dnsmasq-dns" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.482182 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" containerName="nova-metadata-log" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.483673 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479\": container with ID starting with e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479 not found: ID does not exist" containerID="e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.483704 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479"} err="failed to get container status \"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479\": rpc error: code = NotFound desc = could not find container \"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479\": container with ID starting with e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479 not found: ID does not exist" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.483725 4752 scope.go:117] "RemoveContainer" containerID="90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c" Jan 22 10:46:58 crc kubenswrapper[4752]: E0122 10:46:58.484160 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c\": container with ID starting with 90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c not found: ID does not exist" containerID="90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.484179 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c"} err="failed to get container status \"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c\": rpc error: code = NotFound desc = could not find container \"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c\": container with ID starting with 90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c not found: ID does not exist" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.484400 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.484193 4752 scope.go:117] "RemoveContainer" containerID="e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.484784 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479"} err="failed to get container status \"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479\": rpc error: code = NotFound desc = could not find container \"e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479\": container with ID starting with e6532e3c401020a81219beb2412f2270f19f9e279acb866ef31126847b841479 not found: ID does not exist" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.485020 4752 scope.go:117] "RemoveContainer" containerID="90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.485275 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c"} err="failed to get container status \"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c\": rpc error: code = NotFound desc = could not find container \"90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c\": container with ID starting with 90b7a37ca22bb5b016157f01b974f084bba7cc0f601464805704ed5dbe96bc9c not found: ID does not exist" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.489513 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.500471 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.522548 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.525352 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.528577 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.528752 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.533274 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.608724 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.608824 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07628b5-105b-4de5-a644-f6a37786572f-logs\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.608973 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.609001 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-config-data\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.609073 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-config-data\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.609157 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4v6c\" (UniqueName: \"kubernetes.io/projected/b07628b5-105b-4de5-a644-f6a37786572f-kube-api-access-b4v6c\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.609220 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bs8x\" (UniqueName: \"kubernetes.io/projected/d08b87c3-3701-4177-b444-ec69e10c7ae1-kube-api-access-9bs8x\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.609295 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710563 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710639 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710662 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07628b5-105b-4de5-a644-f6a37786572f-logs\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710749 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710775 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-config-data\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710819 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-config-data\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710883 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4v6c\" (UniqueName: \"kubernetes.io/projected/b07628b5-105b-4de5-a644-f6a37786572f-kube-api-access-b4v6c\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.710910 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bs8x\" (UniqueName: \"kubernetes.io/projected/d08b87c3-3701-4177-b444-ec69e10c7ae1-kube-api-access-9bs8x\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.712203 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07628b5-105b-4de5-a644-f6a37786572f-logs\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.717815 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.717841 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.718446 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.718671 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-config-data\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.720477 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-config-data\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.727826 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bs8x\" (UniqueName: \"kubernetes.io/projected/d08b87c3-3701-4177-b444-ec69e10c7ae1-kube-api-access-9bs8x\") pod \"nova-scheduler-0\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.727984 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4v6c\" (UniqueName: \"kubernetes.io/projected/b07628b5-105b-4de5-a644-f6a37786572f-kube-api-access-b4v6c\") pod \"nova-metadata-0\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " pod="openstack/nova-metadata-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.813231 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:46:58 crc kubenswrapper[4752]: I0122 10:46:58.843049 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.110911 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db375bb-394f-4c80-93c3-9145d730ddd8" path="/var/lib/kubelet/pods/3db375bb-394f-4c80-93c3-9145d730ddd8/volumes" Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.113091 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf48cd2f-5dee-42bc-a49e-35ae01dde6be" path="/var/lib/kubelet/pods/bf48cd2f-5dee-42bc-a49e-35ae01dde6be/volumes" Jan 22 10:46:59 crc kubenswrapper[4752]: W0122 10:46:59.331180 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd08b87c3_3701_4177_b444_ec69e10c7ae1.slice/crio-3e5cccb722dec6ce97f87b190ef807f817818500ad62b2578990945ca6bf364d WatchSource:0}: Error finding container 3e5cccb722dec6ce97f87b190ef807f817818500ad62b2578990945ca6bf364d: Status 404 returned error can't find the container with id 3e5cccb722dec6ce97f87b190ef807f817818500ad62b2578990945ca6bf364d Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.335084 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.396987 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:46:59 crc kubenswrapper[4752]: W0122 10:46:59.449600 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb07628b5_105b_4de5_a644_f6a37786572f.slice/crio-dd1b5c769f6f9bb100bdffc0eb52ffce2f8f7b621bed5991ead41b6ff703ef17 WatchSource:0}: Error finding container dd1b5c769f6f9bb100bdffc0eb52ffce2f8f7b621bed5991ead41b6ff703ef17: Status 404 returned error can't find the container with id dd1b5c769f6f9bb100bdffc0eb52ffce2f8f7b621bed5991ead41b6ff703ef17 Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.851806 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.939101 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-combined-ca-bundle\") pod \"14e37871-b85a-4134-abd6-b8bfb0c6b696\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.939565 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wlhc\" (UniqueName: \"kubernetes.io/projected/14e37871-b85a-4134-abd6-b8bfb0c6b696-kube-api-access-2wlhc\") pod \"14e37871-b85a-4134-abd6-b8bfb0c6b696\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.939626 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-config-data\") pod \"14e37871-b85a-4134-abd6-b8bfb0c6b696\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.939710 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e37871-b85a-4134-abd6-b8bfb0c6b696-logs\") pod \"14e37871-b85a-4134-abd6-b8bfb0c6b696\" (UID: \"14e37871-b85a-4134-abd6-b8bfb0c6b696\") " Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.940282 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e37871-b85a-4134-abd6-b8bfb0c6b696-logs" (OuterVolumeSpecName: "logs") pod "14e37871-b85a-4134-abd6-b8bfb0c6b696" (UID: "14e37871-b85a-4134-abd6-b8bfb0c6b696"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.943847 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e37871-b85a-4134-abd6-b8bfb0c6b696-kube-api-access-2wlhc" (OuterVolumeSpecName: "kube-api-access-2wlhc") pod "14e37871-b85a-4134-abd6-b8bfb0c6b696" (UID: "14e37871-b85a-4134-abd6-b8bfb0c6b696"). InnerVolumeSpecName "kube-api-access-2wlhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.969450 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14e37871-b85a-4134-abd6-b8bfb0c6b696" (UID: "14e37871-b85a-4134-abd6-b8bfb0c6b696"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:46:59 crc kubenswrapper[4752]: I0122 10:46:59.980007 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-config-data" (OuterVolumeSpecName: "config-data") pod "14e37871-b85a-4134-abd6-b8bfb0c6b696" (UID: "14e37871-b85a-4134-abd6-b8bfb0c6b696"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.042526 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.042571 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wlhc\" (UniqueName: \"kubernetes.io/projected/14e37871-b85a-4134-abd6-b8bfb0c6b696-kube-api-access-2wlhc\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.042583 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e37871-b85a-4134-abd6-b8bfb0c6b696-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.042596 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14e37871-b85a-4134-abd6-b8bfb0c6b696-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.342886 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08b87c3-3701-4177-b444-ec69e10c7ae1","Type":"ContainerStarted","Data":"0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.343029 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08b87c3-3701-4177-b444-ec69e10c7ae1","Type":"ContainerStarted","Data":"3e5cccb722dec6ce97f87b190ef807f817818500ad62b2578990945ca6bf364d"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.345583 4752 generic.go:334] "Generic (PLEG): container finished" podID="93e19f28-ee06-4011-99f5-76be05faf55f" containerID="bbcc8a94edc9a1e6d721521a1e7e5ade57527721e1f2e72f99f26fc66ce68656" exitCode=0 Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.345675 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-clql4" event={"ID":"93e19f28-ee06-4011-99f5-76be05faf55f","Type":"ContainerDied","Data":"bbcc8a94edc9a1e6d721521a1e7e5ade57527721e1f2e72f99f26fc66ce68656"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.348416 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b07628b5-105b-4de5-a644-f6a37786572f","Type":"ContainerStarted","Data":"6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.348449 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b07628b5-105b-4de5-a644-f6a37786572f","Type":"ContainerStarted","Data":"ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.348487 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b07628b5-105b-4de5-a644-f6a37786572f","Type":"ContainerStarted","Data":"dd1b5c769f6f9bb100bdffc0eb52ffce2f8f7b621bed5991ead41b6ff703ef17"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.353557 4752 generic.go:334] "Generic (PLEG): container finished" podID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerID="72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122" exitCode=0 Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.353587 4752 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14e37871-b85a-4134-abd6-b8bfb0c6b696","Type":"ContainerDied","Data":"72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.353624 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14e37871-b85a-4134-abd6-b8bfb0c6b696","Type":"ContainerDied","Data":"5888fe19c2c235cdcd6c7c6a8842071a4a5ed2cbfa350a4486e82b2dcb409829"} Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.353660 4752 scope.go:117] "RemoveContainer" containerID="72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.353663 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.373152 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.373128012 podStartE2EDuration="2.373128012s" podCreationTimestamp="2026-01-22 10:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:00.371519871 +0000 UTC m=+1299.601462819" watchObservedRunningTime="2026-01-22 10:47:00.373128012 +0000 UTC m=+1299.603070930" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.408630 4752 scope.go:117] "RemoveContainer" containerID="2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.439558 4752 scope.go:117] "RemoveContainer" containerID="72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122" Jan 22 10:47:00 crc kubenswrapper[4752]: E0122 10:47:00.439987 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122\": container with ID starting with 72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122 not found: ID does not exist" containerID="72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.440032 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122"} err="failed to get container status \"72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122\": rpc error: code = NotFound desc = could not find container \"72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122\": container with ID starting with 72e72a27219abe8f199c156a09bb44a9aa6859927c0f6bc403d491d34f86b122 not found: ID does not exist" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.440058 4752 scope.go:117] "RemoveContainer" containerID="2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c" Jan 22 10:47:00 crc kubenswrapper[4752]: E0122 10:47:00.445019 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c\": container with ID starting with 2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c not found: ID does not exist" containerID="2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.445096 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c"} err="failed to get container status \"2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c\": rpc error: code = NotFound desc = could not find container \"2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c\": container with ID starting with 2028f02f63595797cbddc422ece706f563f3cde41f307bc3b174df26c83f4f5c not found: ID does not exist" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.446352 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.446328306 podStartE2EDuration="2.446328306s" podCreationTimestamp="2026-01-22 10:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:00.437950022 +0000 UTC m=+1299.667892930" watchObservedRunningTime="2026-01-22 10:47:00.446328306 +0000 UTC m=+1299.676271214" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.505783 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.518101 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.530943 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:00 crc kubenswrapper[4752]: E0122 10:47:00.531452 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-api" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.531477 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-api" Jan 22 10:47:00 crc kubenswrapper[4752]: E0122 10:47:00.531509 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-log" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.531517 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-log" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.531744 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-log" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.531772 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" containerName="nova-api-api" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.533018 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.535329 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.546289 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.658492 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-config-data\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.658609 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4df40f3f-4152-45ab-9509-e9a93221a9c5-logs\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.658654 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvfb\" (UniqueName: \"kubernetes.io/projected/4df40f3f-4152-45ab-9509-e9a93221a9c5-kube-api-access-lwvfb\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.658706 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.760172 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.760762 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-config-data\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.760978 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4df40f3f-4152-45ab-9509-e9a93221a9c5-logs\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.761042 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvfb\" (UniqueName: \"kubernetes.io/projected/4df40f3f-4152-45ab-9509-e9a93221a9c5-kube-api-access-lwvfb\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.761897 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4df40f3f-4152-45ab-9509-e9a93221a9c5-logs\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " 
pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.765761 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.766645 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-config-data\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.785692 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwvfb\" (UniqueName: \"kubernetes.io/projected/4df40f3f-4152-45ab-9509-e9a93221a9c5-kube-api-access-lwvfb\") pod \"nova-api-0\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " pod="openstack/nova-api-0" Jan 22 10:47:00 crc kubenswrapper[4752]: I0122 10:47:00.859021 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.185383 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e37871-b85a-4134-abd6-b8bfb0c6b696" path="/var/lib/kubelet/pods/14e37871-b85a-4134-abd6-b8bfb0c6b696/volumes" Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.384681 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.824509 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-clql4" Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.934959 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxn2j\" (UniqueName: \"kubernetes.io/projected/93e19f28-ee06-4011-99f5-76be05faf55f-kube-api-access-wxn2j\") pod \"93e19f28-ee06-4011-99f5-76be05faf55f\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.935036 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-scripts\") pod \"93e19f28-ee06-4011-99f5-76be05faf55f\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.935148 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-config-data\") pod \"93e19f28-ee06-4011-99f5-76be05faf55f\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.935297 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-combined-ca-bundle\") pod \"93e19f28-ee06-4011-99f5-76be05faf55f\" (UID: \"93e19f28-ee06-4011-99f5-76be05faf55f\") " Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.940962 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-scripts" (OuterVolumeSpecName: "scripts") pod "93e19f28-ee06-4011-99f5-76be05faf55f" (UID: 
"93e19f28-ee06-4011-99f5-76be05faf55f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.941516 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e19f28-ee06-4011-99f5-76be05faf55f-kube-api-access-wxn2j" (OuterVolumeSpecName: "kube-api-access-wxn2j") pod "93e19f28-ee06-4011-99f5-76be05faf55f" (UID: "93e19f28-ee06-4011-99f5-76be05faf55f"). InnerVolumeSpecName "kube-api-access-wxn2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.967897 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93e19f28-ee06-4011-99f5-76be05faf55f" (UID: "93e19f28-ee06-4011-99f5-76be05faf55f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:01 crc kubenswrapper[4752]: I0122 10:47:01.970694 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-config-data" (OuterVolumeSpecName: "config-data") pod "93e19f28-ee06-4011-99f5-76be05faf55f" (UID: "93e19f28-ee06-4011-99f5-76be05faf55f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.041067 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.042912 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxn2j\" (UniqueName: \"kubernetes.io/projected/93e19f28-ee06-4011-99f5-76be05faf55f-kube-api-access-wxn2j\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.043149 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.043550 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e19f28-ee06-4011-99f5-76be05faf55f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.402448 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-clql4" event={"ID":"93e19f28-ee06-4011-99f5-76be05faf55f","Type":"ContainerDied","Data":"6bb458393477c6303f62da84905a0c8940fc8fd20bb31a1eb615a1510ae3d7c0"} Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.402504 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb458393477c6303f62da84905a0c8940fc8fd20bb31a1eb615a1510ae3d7c0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.402600 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-clql4" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.409177 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4df40f3f-4152-45ab-9509-e9a93221a9c5","Type":"ContainerStarted","Data":"9a99ceb47d7b122b62b40bfbddc13eaba59d64f7b76c287905372ef34450803d"} Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.409212 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4df40f3f-4152-45ab-9509-e9a93221a9c5","Type":"ContainerStarted","Data":"66167773bc45dd75da683b84286928ab05a752ad9165a49dd8f18ba308091f87"} Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.409226 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4df40f3f-4152-45ab-9509-e9a93221a9c5","Type":"ContainerStarted","Data":"4122330ed1db6ae0c494b9e49c1149987aaaf5cf7153860d1b78b94ed09a7f29"} Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.433664 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.433639954 podStartE2EDuration="2.433639954s" podCreationTimestamp="2026-01-22 10:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:02.431301464 +0000 UTC m=+1301.661244372" watchObservedRunningTime="2026-01-22 10:47:02.433639954 +0000 UTC m=+1301.663582862" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.544950 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 10:47:02 crc kubenswrapper[4752]: E0122 10:47:02.545409 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e19f28-ee06-4011-99f5-76be05faf55f" containerName="nova-cell1-conductor-db-sync" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.545430 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e19f28-ee06-4011-99f5-76be05faf55f" containerName="nova-cell1-conductor-db-sync" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.545710 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e19f28-ee06-4011-99f5-76be05faf55f" containerName="nova-cell1-conductor-db-sync" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.546497 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.549331 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.556203 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.658295 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.658736 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqvx\" (UniqueName: \"kubernetes.io/projected/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-kube-api-access-ppqvx\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.659084 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.761387 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.761466 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqvx\" (UniqueName: \"kubernetes.io/projected/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-kube-api-access-ppqvx\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.761516 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.768550 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.768884 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.782028 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqvx\" (UniqueName: \"kubernetes.io/projected/5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8-kube-api-access-ppqvx\") pod \"nova-cell1-conductor-0\" (UID: \"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8\") " pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:02 crc kubenswrapper[4752]: I0122 10:47:02.865378 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:03 crc kubenswrapper[4752]: I0122 10:47:03.392643 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 10:47:03 crc kubenswrapper[4752]: I0122 10:47:03.422984 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8","Type":"ContainerStarted","Data":"683b0813e8987af06546cd29615696f2156fd3201cce13c2ace3a95bba81e3a7"} Jan 22 10:47:03 crc kubenswrapper[4752]: I0122 10:47:03.813818 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 10:47:03 crc kubenswrapper[4752]: I0122 10:47:03.844161 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:47:03 crc kubenswrapper[4752]: I0122 10:47:03.844216 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:47:04 crc kubenswrapper[4752]: I0122 10:47:04.436482 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5eedd8ac-3f8b-4c96-9065-0bf2dbb1dea8","Type":"ContainerStarted","Data":"2a64896f722b484fa379eacebbd9a7bc0b81fe912b262e9980fcbdeb38135841"} Jan 22 10:47:04 crc kubenswrapper[4752]: I0122 10:47:04.436606 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:04 crc kubenswrapper[4752]: I0122 10:47:04.465293 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.4652652059999998 podStartE2EDuration="2.465265206s" podCreationTimestamp="2026-01-22 10:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:04.453085944 +0000 UTC m=+1303.683028852" watchObservedRunningTime="2026-01-22 10:47:04.465265206 +0000 UTC m=+1303.695208114" Jan 22 10:47:06 crc kubenswrapper[4752]: I0122 10:47:06.592043 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 10:47:08 crc kubenswrapper[4752]: I0122 10:47:08.814367 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 10:47:08 crc kubenswrapper[4752]: I0122 10:47:08.844529 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 10:47:08 crc kubenswrapper[4752]: I0122 10:47:08.844764 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:47:08 crc kubenswrapper[4752]: I0122 10:47:08.845117 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:47:09 crc kubenswrapper[4752]: I0122 10:47:09.527305 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 10:47:09 crc 
kubenswrapper[4752]: I0122 10:47:09.860120 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:09 crc kubenswrapper[4752]: I0122 10:47:09.860413 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:10 crc kubenswrapper[4752]: I0122 10:47:10.860350 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:47:10 crc kubenswrapper[4752]: I0122 10:47:10.860922 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.343286 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.343474 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="37c7ae68-7fae-44ab-bb3a-f838bdc15bea" containerName="kube-state-metrics" containerID="cri-o://32669c604b914124005a7643850c663237813186676b523d6c7223541b4d984b" gracePeriod=30 Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.520513 4752 generic.go:334] "Generic (PLEG): container finished" podID="37c7ae68-7fae-44ab-bb3a-f838bdc15bea" containerID="32669c604b914124005a7643850c663237813186676b523d6c7223541b4d984b" exitCode=2 Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.520559 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"37c7ae68-7fae-44ab-bb3a-f838bdc15bea","Type":"ContainerDied","Data":"32669c604b914124005a7643850c663237813186676b523d6c7223541b4d984b"} Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.954157 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.954707 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:11 crc kubenswrapper[4752]: I0122 10:47:11.966831 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.012910 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p7sw\" (UniqueName: \"kubernetes.io/projected/37c7ae68-7fae-44ab-bb3a-f838bdc15bea-kube-api-access-7p7sw\") pod \"37c7ae68-7fae-44ab-bb3a-f838bdc15bea\" (UID: \"37c7ae68-7fae-44ab-bb3a-f838bdc15bea\") " Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.025154 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c7ae68-7fae-44ab-bb3a-f838bdc15bea-kube-api-access-7p7sw" (OuterVolumeSpecName: "kube-api-access-7p7sw") pod "37c7ae68-7fae-44ab-bb3a-f838bdc15bea" (UID: "37c7ae68-7fae-44ab-bb3a-f838bdc15bea"). InnerVolumeSpecName "kube-api-access-7p7sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.115363 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p7sw\" (UniqueName: \"kubernetes.io/projected/37c7ae68-7fae-44ab-bb3a-f838bdc15bea-kube-api-access-7p7sw\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.534411 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"37c7ae68-7fae-44ab-bb3a-f838bdc15bea","Type":"ContainerDied","Data":"560b3410955d2425ad355e6a84f82f9917f0188e7e81c37060502857ee8fc0bc"} Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.534738 4752 scope.go:117] "RemoveContainer" containerID="32669c604b914124005a7643850c663237813186676b523d6c7223541b4d984b" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.534933 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.585572 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.594645 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.622914 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:47:12 crc kubenswrapper[4752]: E0122 10:47:12.623418 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c7ae68-7fae-44ab-bb3a-f838bdc15bea" containerName="kube-state-metrics" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.623437 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c7ae68-7fae-44ab-bb3a-f838bdc15bea" containerName="kube-state-metrics" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.623738 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c7ae68-7fae-44ab-bb3a-f838bdc15bea" containerName="kube-state-metrics" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.624671 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.629121 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.629282 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.639618 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.735508 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.735657 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.735700 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7f9\" (UniqueName: \"kubernetes.io/projected/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-api-access-bp7f9\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.735733 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.837311 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7f9\" (UniqueName: \"kubernetes.io/projected/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-api-access-bp7f9\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.837357 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.837480 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.837534 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.843441 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.844272 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.844769 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870fa35b-ba4a-4e94-8860-12b11673d09c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.864342 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7f9\" (UniqueName: \"kubernetes.io/projected/870fa35b-ba4a-4e94-8860-12b11673d09c-kube-api-access-bp7f9\") pod \"kube-state-metrics-0\" (UID: \"870fa35b-ba4a-4e94-8860-12b11673d09c\") " pod="openstack/kube-state-metrics-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.909014 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 22 10:47:12 crc kubenswrapper[4752]: I0122 10:47:12.979101 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.125642 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c7ae68-7fae-44ab-bb3a-f838bdc15bea" path="/var/lib/kubelet/pods/37c7ae68-7fae-44ab-bb3a-f838bdc15bea/volumes" Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.478333 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 10:47:13 crc kubenswrapper[4752]: W0122 10:47:13.483347 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod870fa35b_ba4a_4e94_8860_12b11673d09c.slice/crio-463aba7fa9cf046c6b5388a27018998fb2bbbf37ca8659dc36f027ab8e080f07 WatchSource:0}: Error finding container 463aba7fa9cf046c6b5388a27018998fb2bbbf37ca8659dc36f027ab8e080f07: Status 404 returned error can't find the container with id 463aba7fa9cf046c6b5388a27018998fb2bbbf37ca8659dc36f027ab8e080f07 Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.549139 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"870fa35b-ba4a-4e94-8860-12b11673d09c","Type":"ContainerStarted","Data":"463aba7fa9cf046c6b5388a27018998fb2bbbf37ca8659dc36f027ab8e080f07"} Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.759418 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.759991 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-central-agent" containerID="cri-o://e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d" gracePeriod=30 Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.760100 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="sg-core" containerID="cri-o://3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e" gracePeriod=30 Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.760124 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-notification-agent" containerID="cri-o://153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae" gracePeriod=30 Jan 22 10:47:13 crc kubenswrapper[4752]: I0122 10:47:13.760130 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="proxy-httpd" containerID="cri-o://39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5" gracePeriod=30 Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.564817 4752 generic.go:334] "Generic (PLEG): container finished" podID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerID="39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5" exitCode=0 Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.565327 4752 generic.go:334] "Generic (PLEG): container finished" podID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerID="3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e" exitCode=2 Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.564900 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerDied","Data":"39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5"} Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.565420 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerDied","Data":"3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e"} Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.567232 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"870fa35b-ba4a-4e94-8860-12b11673d09c","Type":"ContainerStarted","Data":"8a01de3762d1b44f4ccc7a370d9f579d1eeed8b98c48f1f50b44a46ea67ba662"} Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.567400 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 10:47:14 crc kubenswrapper[4752]: I0122 10:47:14.592038 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.221835685 podStartE2EDuration="2.592012471s" podCreationTimestamp="2026-01-22 10:47:12 +0000 UTC" firstStartedPulling="2026-01-22 10:47:13.48574283 +0000 UTC m=+1312.715685738" lastFinishedPulling="2026-01-22 10:47:13.855919616 +0000 UTC m=+1313.085862524" observedRunningTime="2026-01-22 10:47:14.588237244 +0000 UTC m=+1313.818180152" watchObservedRunningTime="2026-01-22 10:47:14.592012471 +0000 UTC m=+1313.821955389" Jan 22 10:47:14 crc kubenswrapper[4752]: E0122 10:47:14.592015 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod680367ae_6a30_4fc4_8c40_f03746ef1288.slice/crio-conmon-e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d.scope\": RecentStats: unable to find data in memory cache]" Jan 22 10:47:15 crc kubenswrapper[4752]: I0122 10:47:15.587713 4752 generic.go:334] "Generic (PLEG): container finished" podID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerID="e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d" exitCode=0 Jan 22 10:47:15 crc kubenswrapper[4752]: I0122 10:47:15.587890 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerDied","Data":"e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d"} Jan 22 10:47:18 crc kubenswrapper[4752]: I0122 10:47:18.850615 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:47:18 crc kubenswrapper[4752]: I0122 10:47:18.856907 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:47:18 crc kubenswrapper[4752]: I0122 10:47:18.860472 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:47:19 crc kubenswrapper[4752]: I0122 10:47:19.637338 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.578346 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.640439 4752 generic.go:334] "Generic (PLEG): container finished" podID="42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" containerID="d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30" exitCode=137 Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.640487 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.640527 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c","Type":"ContainerDied","Data":"d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30"} Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.640562 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c","Type":"ContainerDied","Data":"8f538dcae8452e535925d2584dd73e4883444a1c34e1353c0cf400a87efb7f6f"} Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.640580 4752 scope.go:117] "RemoveContainer" containerID="d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.673614 4752 scope.go:117] "RemoveContainer" containerID="d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30" Jan 22 10:47:20 crc kubenswrapper[4752]: E0122 10:47:20.674142 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30\": container with ID starting with d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30 not found: ID does not exist" containerID="d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.674180 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30"} err="failed to get container status \"d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30\": rpc error: code = NotFound desc = could not find container \"d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30\": container with ID starting with d7088b18489a94fcd40eeaefab252c773eefed8a34508f1cb786a25c66442c30 not found: ID does not exist" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.697247 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-combined-ca-bundle\") pod \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.697395 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-config-data\") pod \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.697510 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwl2j\" (UniqueName: \"kubernetes.io/projected/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-kube-api-access-hwl2j\") pod 
\"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\" (UID: \"42ef5e1c-28bf-44f3-b614-7ce57b1daf4c\") " Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.702915 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-kube-api-access-hwl2j" (OuterVolumeSpecName: "kube-api-access-hwl2j") pod "42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" (UID: "42ef5e1c-28bf-44f3-b614-7ce57b1daf4c"). InnerVolumeSpecName "kube-api-access-hwl2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.726003 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" (UID: "42ef5e1c-28bf-44f3-b614-7ce57b1daf4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.746995 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-config-data" (OuterVolumeSpecName: "config-data") pod "42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" (UID: "42ef5e1c-28bf-44f3-b614-7ce57b1daf4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.800718 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.800753 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.800764 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwl2j\" (UniqueName: \"kubernetes.io/projected/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c-kube-api-access-hwl2j\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.866660 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.867068 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.867406 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.872396 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.972580 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.982481 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.994712 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:47:20 crc kubenswrapper[4752]: E0122 10:47:20.995210 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.995229 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.995425 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.996154 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.998275 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.998448 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 10:47:20 crc kubenswrapper[4752]: I0122 10:47:20.998805 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.004563 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.106668 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzgb\" (UniqueName: \"kubernetes.io/projected/a2bb65e3-9732-44dc-8353-187eab29eff3-kube-api-access-fmzgb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.106729 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.106757 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.106787 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.106945 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.111767 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="42ef5e1c-28bf-44f3-b614-7ce57b1daf4c" path="/var/lib/kubelet/pods/42ef5e1c-28bf-44f3-b614-7ce57b1daf4c/volumes" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.208787 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.208985 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzgb\" (UniqueName: \"kubernetes.io/projected/a2bb65e3-9732-44dc-8353-187eab29eff3-kube-api-access-fmzgb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.209017 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.209035 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.209072 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.211181 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.211478 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.212199 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.221791 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.223561 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.225655 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.229613 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzgb\" (UniqueName: \"kubernetes.io/projected/a2bb65e3-9732-44dc-8353-187eab29eff3-kube-api-access-fmzgb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.230329 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2bb65e3-9732-44dc-8353-187eab29eff3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a2bb65e3-9732-44dc-8353-187eab29eff3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.325520 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.654458 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:47:21 crc kubenswrapper[4752]: I0122 10:47:21.660925 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.085933 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.097012 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfbd48f-4mxp4"] Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.100025 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.147633 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfbd48f-4mxp4"] Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.294973 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5ln\" (UniqueName: \"kubernetes.io/projected/dd937dca-75fa-4ec8-b570-7e8d3a749654-kube-api-access-fc5ln\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.295032 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-svc\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.295084 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-config\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.295114 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.295326 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.295417 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.397821 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-config\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.398386 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.398740 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.398888 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.398983 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-svc\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.399009 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5ln\" (UniqueName: \"kubernetes.io/projected/dd937dca-75fa-4ec8-b570-7e8d3a749654-kube-api-access-fc5ln\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.399396 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-config\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.399506 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.399893 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.400104 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.400904 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-svc\") pod \"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.446737 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5ln\" (UniqueName: \"kubernetes.io/projected/dd937dca-75fa-4ec8-b570-7e8d3a749654-kube-api-access-fc5ln\") pod 
\"dnsmasq-dns-fcfbd48f-4mxp4\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.666058 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a2bb65e3-9732-44dc-8353-187eab29eff3","Type":"ContainerStarted","Data":"ade705f53261dd8abecaf4feb5805822a9c04a4be8d84e3e3e3b63c173b1b600"} Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.667090 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a2bb65e3-9732-44dc-8353-187eab29eff3","Type":"ContainerStarted","Data":"05f52f5fff8a1864c007c81cb6d6e4689f6b92cd0317909dced083ecad98d4b1"} Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.711927 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.71189896 podStartE2EDuration="2.71189896s" podCreationTimestamp="2026-01-22 10:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:22.691501958 +0000 UTC m=+1321.921444876" watchObservedRunningTime="2026-01-22 10:47:22.71189896 +0000 UTC m=+1321.941841868" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.737967 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:22 crc kubenswrapper[4752]: I0122 10:47:22.990542 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 10:47:23 crc kubenswrapper[4752]: W0122 10:47:23.226704 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd937dca_75fa_4ec8_b570_7e8d3a749654.slice/crio-01120b643d29b24c05c2bb3945f44e0ca18f51b89a7d65e069deeb1ad71514ed WatchSource:0}: Error finding container 01120b643d29b24c05c2bb3945f44e0ca18f51b89a7d65e069deeb1ad71514ed: Status 404 returned error can't find the container with id 01120b643d29b24c05c2bb3945f44e0ca18f51b89a7d65e069deeb1ad71514ed Jan 22 10:47:23 crc kubenswrapper[4752]: I0122 10:47:23.227139 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfbd48f-4mxp4"] Jan 22 10:47:23 crc kubenswrapper[4752]: I0122 10:47:23.681252 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" event={"ID":"dd937dca-75fa-4ec8-b570-7e8d3a749654","Type":"ContainerStarted","Data":"01120b643d29b24c05c2bb3945f44e0ca18f51b89a7d65e069deeb1ad71514ed"} Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.176186 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342131 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-run-httpd\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342259 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn9cs\" (UniqueName: \"kubernetes.io/projected/680367ae-6a30-4fc4-8c40-f03746ef1288-kube-api-access-qn9cs\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342306 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-log-httpd\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342365 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-combined-ca-bundle\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342594 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-config-data\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342632 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-sg-core-conf-yaml\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.342662 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-scripts\") pod \"680367ae-6a30-4fc4-8c40-f03746ef1288\" (UID: \"680367ae-6a30-4fc4-8c40-f03746ef1288\") " Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.344496 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.346704 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.350257 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-scripts" (OuterVolumeSpecName: "scripts") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.355052 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680367ae-6a30-4fc4-8c40-f03746ef1288-kube-api-access-qn9cs" (OuterVolumeSpecName: "kube-api-access-qn9cs") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "kube-api-access-qn9cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.445145 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.445459 4752 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.445471 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn9cs\" (UniqueName: \"kubernetes.io/projected/680367ae-6a30-4fc4-8c40-f03746ef1288-kube-api-access-qn9cs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.445480 4752 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/680367ae-6a30-4fc4-8c40-f03746ef1288-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.474579 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.541005 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.549048 4752 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.549080 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.622951 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-config-data" (OuterVolumeSpecName: "config-data") pod "680367ae-6a30-4fc4-8c40-f03746ef1288" (UID: "680367ae-6a30-4fc4-8c40-f03746ef1288"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.651161 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680367ae-6a30-4fc4-8c40-f03746ef1288-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.692305 4752 generic.go:334] "Generic (PLEG): container finished" podID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerID="153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae" exitCode=0 Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.692362 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerDied","Data":"153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae"} Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.692389 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"680367ae-6a30-4fc4-8c40-f03746ef1288","Type":"ContainerDied","Data":"737865b0bfae2293b1783d9193f62edeff1b97a2a72d8fcf31d7aaf6ec103103"} Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.692406 4752 scope.go:117] "RemoveContainer" containerID="39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.692520 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.714224 4752 generic.go:334] "Generic (PLEG): container finished" podID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerID="98cb387208a42842e71f0bd2e2ae4e6be58ffec213c8a77534f9060a8acf3ac3" exitCode=0 Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.715120 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" event={"ID":"dd937dca-75fa-4ec8-b570-7e8d3a749654","Type":"ContainerDied","Data":"98cb387208a42842e71f0bd2e2ae4e6be58ffec213c8a77534f9060a8acf3ac3"} Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.730671 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.732883 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-api" containerID="cri-o://9a99ceb47d7b122b62b40bfbddc13eaba59d64f7b76c287905372ef34450803d" gracePeriod=30 Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.739932 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-log" containerID="cri-o://66167773bc45dd75da683b84286928ab05a752ad9165a49dd8f18ba308091f87" gracePeriod=30 Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.778167 4752 scope.go:117] "RemoveContainer" containerID="3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.839106 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.871459 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.881523 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:24 crc kubenswrapper[4752]: E0122 10:47:24.881978 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-notification-agent" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882006 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-notification-agent" Jan 22 10:47:24 crc kubenswrapper[4752]: E0122 10:47:24.882019 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-central-agent" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882027 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-central-agent" Jan 22 10:47:24 crc kubenswrapper[4752]: E0122 10:47:24.882041 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="sg-core" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882048 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="sg-core" Jan 22 10:47:24 crc kubenswrapper[4752]: E0122 10:47:24.882065 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="proxy-httpd" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882073 
4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="proxy-httpd" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882308 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="sg-core" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882325 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="proxy-httpd" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882339 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-central-agent" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.882348 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" containerName="ceilometer-notification-agent" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.884183 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.886724 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.886792 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.887187 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 10:47:24 crc kubenswrapper[4752]: I0122 10:47:24.899462 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.054203 4752 scope.go:117] "RemoveContainer" containerID="153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.071695 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-run-httpd\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.071763 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.071834 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.071891 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-log-httpd\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.071916 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f886\" (UniqueName: \"kubernetes.io/projected/0e20ee33-2442-4708-a491-c2708aaced7c-kube-api-access-5f886\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.071986 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-config-data\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.072115 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-scripts\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.072141 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.099519 4752 scope.go:117] "RemoveContainer" containerID="e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.112709 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680367ae-6a30-4fc4-8c40-f03746ef1288" path="/var/lib/kubelet/pods/680367ae-6a30-4fc4-8c40-f03746ef1288/volumes" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.128737 4752 scope.go:117] "RemoveContainer" containerID="39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5" Jan 22 10:47:25 crc kubenswrapper[4752]: E0122 10:47:25.129340 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5\": container with ID starting with 39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5 not found: ID does not exist" containerID="39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.129404 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5"} err="failed to get container status \"39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5\": rpc error: code = NotFound desc = could not find container \"39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5\": container with ID starting with 39e6643d2a300ec0a93441b03b26821bf8d8d594b5faffce38a3496a9acb2ff5 not found: ID does not exist" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.129448 4752 scope.go:117] "RemoveContainer" containerID="3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e" Jan 22 10:47:25 crc kubenswrapper[4752]: E0122 10:47:25.129956 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e\": container with ID starting with 
3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e not found: ID does not exist" containerID="3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.130003 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e"} err="failed to get container status \"3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e\": rpc error: code = NotFound desc = could not find container \"3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e\": container with ID starting with 3633a1daee0c105af074e3ab517add0dc7e16751f9a406715cf0a2e598221c0e not found: ID does not exist" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.130019 4752 scope.go:117] "RemoveContainer" containerID="153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae" Jan 22 10:47:25 crc kubenswrapper[4752]: E0122 10:47:25.130518 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae\": container with ID starting with 153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae not found: ID does not exist" containerID="153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.130560 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae"} err="failed to get container status \"153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae\": rpc error: code = NotFound desc = could not find container \"153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae\": container with ID starting with 153ce50797dd5f691eec3265271048a3f4d454f34455a9a1f7bf47df2268f3ae not found: ID does not exist" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.130598 4752 scope.go:117] "RemoveContainer" containerID="e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d" Jan 22 10:47:25 crc kubenswrapper[4752]: E0122 10:47:25.131049 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d\": container with ID starting with e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d not found: ID does not exist" containerID="e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.131075 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d"} err="failed to get container status \"e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d\": rpc error: code = NotFound desc = could not find container \"e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d\": container with ID starting with e19e55f72cff0ca0920b2c579961d08ebe4cd32179d382ea8142b0e19191495d not found: ID does not exist" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174451 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-scripts\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " 
pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174524 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174582 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-run-httpd\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174616 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174636 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174662 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-log-httpd\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174681 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f886\" (UniqueName: \"kubernetes.io/projected/0e20ee33-2442-4708-a491-c2708aaced7c-kube-api-access-5f886\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.174737 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-config-data\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.176292 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-log-httpd\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.176289 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-run-httpd\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.182055 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " 
pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.182128 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-scripts\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.182482 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-config-data\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.182702 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.188597 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.203661 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f886\" (UniqueName: \"kubernetes.io/projected/0e20ee33-2442-4708-a491-c2708aaced7c-kube-api-access-5f886\") pod \"ceilometer-0\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.207765 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.574453 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.729165 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" event={"ID":"dd937dca-75fa-4ec8-b570-7e8d3a749654","Type":"ContainerStarted","Data":"21e442c192ac915b8fdd6f9fc0e7c935fe04f0df6ef7df185c0258c7f8234413"} Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.729289 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.734427 4752 generic.go:334] "Generic (PLEG): container finished" podID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerID="66167773bc45dd75da683b84286928ab05a752ad9165a49dd8f18ba308091f87" exitCode=143 Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.734487 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4df40f3f-4152-45ab-9509-e9a93221a9c5","Type":"ContainerDied","Data":"66167773bc45dd75da683b84286928ab05a752ad9165a49dd8f18ba308091f87"} Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.744083 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:25 crc kubenswrapper[4752]: I0122 10:47:25.774016 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" podStartSLOduration=4.773993463 podStartE2EDuration="4.773993463s" podCreationTimestamp="2026-01-22 10:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:25.759438021 +0000 UTC m=+1324.989380939" watchObservedRunningTime="2026-01-22 10:47:25.773993463 +0000 UTC m=+1325.003936371" Jan 22 10:47:25 crc kubenswrapper[4752]: W0122 10:47:25.776922 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e20ee33_2442_4708_a491_c2708aaced7c.slice/crio-3d1b0e62fee62093944a18f4c516364a6013f1927006ed16b9da34e16a01ab24 WatchSource:0}: Error finding container 3d1b0e62fee62093944a18f4c516364a6013f1927006ed16b9da34e16a01ab24: Status 404 returned error can't find the container with id 3d1b0e62fee62093944a18f4c516364a6013f1927006ed16b9da34e16a01ab24 Jan 22 10:47:26 crc kubenswrapper[4752]: I0122 10:47:26.326593 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:26 crc kubenswrapper[4752]: I0122 10:47:26.749814 4752 generic.go:334] "Generic (PLEG): container finished" podID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerID="9a99ceb47d7b122b62b40bfbddc13eaba59d64f7b76c287905372ef34450803d" exitCode=0 Jan 22 10:47:26 crc kubenswrapper[4752]: I0122 10:47:26.749926 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4df40f3f-4152-45ab-9509-e9a93221a9c5","Type":"ContainerDied","Data":"9a99ceb47d7b122b62b40bfbddc13eaba59d64f7b76c287905372ef34450803d"} Jan 22 10:47:26 crc kubenswrapper[4752]: I0122 10:47:26.754223 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerStarted","Data":"aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c"} Jan 22 10:47:26 crc 
kubenswrapper[4752]: I0122 10:47:26.754257 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerStarted","Data":"bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5"} Jan 22 10:47:26 crc kubenswrapper[4752]: I0122 10:47:26.754266 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerStarted","Data":"3d1b0e62fee62093944a18f4c516364a6013f1927006ed16b9da34e16a01ab24"} Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.028968 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.227337 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvfb\" (UniqueName: \"kubernetes.io/projected/4df40f3f-4152-45ab-9509-e9a93221a9c5-kube-api-access-lwvfb\") pod \"4df40f3f-4152-45ab-9509-e9a93221a9c5\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.227784 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-config-data\") pod \"4df40f3f-4152-45ab-9509-e9a93221a9c5\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.227879 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4df40f3f-4152-45ab-9509-e9a93221a9c5-logs\") pod \"4df40f3f-4152-45ab-9509-e9a93221a9c5\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.228103 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-combined-ca-bundle\") pod \"4df40f3f-4152-45ab-9509-e9a93221a9c5\" (UID: \"4df40f3f-4152-45ab-9509-e9a93221a9c5\") " Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.228286 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df40f3f-4152-45ab-9509-e9a93221a9c5-logs" (OuterVolumeSpecName: "logs") pod "4df40f3f-4152-45ab-9509-e9a93221a9c5" (UID: "4df40f3f-4152-45ab-9509-e9a93221a9c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.228718 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4df40f3f-4152-45ab-9509-e9a93221a9c5-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.256316 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df40f3f-4152-45ab-9509-e9a93221a9c5-kube-api-access-lwvfb" (OuterVolumeSpecName: "kube-api-access-lwvfb") pod "4df40f3f-4152-45ab-9509-e9a93221a9c5" (UID: "4df40f3f-4152-45ab-9509-e9a93221a9c5"). InnerVolumeSpecName "kube-api-access-lwvfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.294135 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-config-data" (OuterVolumeSpecName: "config-data") pod "4df40f3f-4152-45ab-9509-e9a93221a9c5" (UID: "4df40f3f-4152-45ab-9509-e9a93221a9c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.297019 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4df40f3f-4152-45ab-9509-e9a93221a9c5" (UID: "4df40f3f-4152-45ab-9509-e9a93221a9c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.332149 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.332184 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvfb\" (UniqueName: \"kubernetes.io/projected/4df40f3f-4152-45ab-9509-e9a93221a9c5-kube-api-access-lwvfb\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.332194 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df40f3f-4152-45ab-9509-e9a93221a9c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.765043 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.764965 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4df40f3f-4152-45ab-9509-e9a93221a9c5","Type":"ContainerDied","Data":"4122330ed1db6ae0c494b9e49c1149987aaaf5cf7153860d1b78b94ed09a7f29"} Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.765355 4752 scope.go:117] "RemoveContainer" containerID="9a99ceb47d7b122b62b40bfbddc13eaba59d64f7b76c287905372ef34450803d" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.767984 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerStarted","Data":"bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a"} Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.800610 4752 scope.go:117] "RemoveContainer" containerID="66167773bc45dd75da683b84286928ab05a752ad9165a49dd8f18ba308091f87" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.800698 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.831152 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.861948 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:27 crc kubenswrapper[4752]: E0122 10:47:27.862920 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-api" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.862944 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-api" Jan 22 10:47:27 crc kubenswrapper[4752]: E0122 10:47:27.862966 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-log" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.862972 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-log" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.863284 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-log" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.863316 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" containerName="nova-api-api" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.864807 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.879803 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.880240 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.881065 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 10:47:27 crc kubenswrapper[4752]: I0122 10:47:27.882787 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.046624 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.047025 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.047189 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0f435e-1113-41cd-9154-fc1db045abe1-logs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.047232 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-config-data\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.047272 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmtl4\" (UniqueName: \"kubernetes.io/projected/5d0f435e-1113-41cd-9154-fc1db045abe1-kube-api-access-wmtl4\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.047376 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.148695 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0f435e-1113-41cd-9154-fc1db045abe1-logs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.148768 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-config-data\") pod \"nova-api-0\" (UID: 
\"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.148805 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmtl4\" (UniqueName: \"kubernetes.io/projected/5d0f435e-1113-41cd-9154-fc1db045abe1-kube-api-access-wmtl4\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.148934 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.149133 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0f435e-1113-41cd-9154-fc1db045abe1-logs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.149324 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.149670 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.153806 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-public-tls-certs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.154371 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.154411 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.154436 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-config-data\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.173896 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmtl4\" (UniqueName: \"kubernetes.io/projected/5d0f435e-1113-41cd-9154-fc1db045abe1-kube-api-access-wmtl4\") pod \"nova-api-0\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " 
pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.187053 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:28 crc kubenswrapper[4752]: W0122 10:47:28.688006 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d0f435e_1113_41cd_9154_fc1db045abe1.slice/crio-33f5c4b8db06e804bf15d45cb4eb3fd06a75e01a8d4a3f1075a861d5ae3a4ee0 WatchSource:0}: Error finding container 33f5c4b8db06e804bf15d45cb4eb3fd06a75e01a8d4a3f1075a861d5ae3a4ee0: Status 404 returned error can't find the container with id 33f5c4b8db06e804bf15d45cb4eb3fd06a75e01a8d4a3f1075a861d5ae3a4ee0 Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.696487 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:28 crc kubenswrapper[4752]: I0122 10:47:28.778145 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d0f435e-1113-41cd-9154-fc1db045abe1","Type":"ContainerStarted","Data":"33f5c4b8db06e804bf15d45cb4eb3fd06a75e01a8d4a3f1075a861d5ae3a4ee0"} Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.109749 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df40f3f-4152-45ab-9509-e9a93221a9c5" path="/var/lib/kubelet/pods/4df40f3f-4152-45ab-9509-e9a93221a9c5/volumes" Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.789257 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerStarted","Data":"aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff"} Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.789372 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-central-agent" containerID="cri-o://bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5" gracePeriod=30 Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.789429 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="proxy-httpd" containerID="cri-o://aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff" gracePeriod=30 Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.789469 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="sg-core" containerID="cri-o://bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a" gracePeriod=30 Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.789487 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-notification-agent" containerID="cri-o://aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c" gracePeriod=30 Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.789672 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.792642 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5d0f435e-1113-41cd-9154-fc1db045abe1","Type":"ContainerStarted","Data":"c47dc9ec7a98b230bca7d4de9a9df4ddba077d5ef13f50efa77d281b78a5e8af"} Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.792788 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d0f435e-1113-41cd-9154-fc1db045abe1","Type":"ContainerStarted","Data":"95efb1892a250da53fe5961ce4185b7f54a8a0de3c328b6f0249f1b4b4742817"} Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.819272 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.802713331 podStartE2EDuration="5.819252237s" podCreationTimestamp="2026-01-22 10:47:24 +0000 UTC" firstStartedPulling="2026-01-22 10:47:25.782285686 +0000 UTC m=+1325.012228594" lastFinishedPulling="2026-01-22 10:47:28.798824592 +0000 UTC m=+1328.028767500" observedRunningTime="2026-01-22 10:47:29.814818893 +0000 UTC m=+1329.044761801" watchObservedRunningTime="2026-01-22 10:47:29.819252237 +0000 UTC m=+1329.049195145" Jan 22 10:47:29 crc kubenswrapper[4752]: I0122 10:47:29.836994 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8369784510000002 podStartE2EDuration="2.836978451s" podCreationTimestamp="2026-01-22 10:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:29.833418399 +0000 UTC m=+1329.063361307" watchObservedRunningTime="2026-01-22 10:47:29.836978451 +0000 UTC m=+1329.066921349" Jan 22 10:47:30 crc kubenswrapper[4752]: I0122 10:47:30.805252 4752 generic.go:334] "Generic (PLEG): container finished" podID="0e20ee33-2442-4708-a491-c2708aaced7c" containerID="aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff" exitCode=0 Jan 22 10:47:30 crc kubenswrapper[4752]: I0122 10:47:30.805301 4752 generic.go:334] "Generic (PLEG): container finished" podID="0e20ee33-2442-4708-a491-c2708aaced7c" containerID="bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a" exitCode=2 Jan 22 10:47:30 crc kubenswrapper[4752]: I0122 10:47:30.805316 4752 generic.go:334] "Generic (PLEG): container finished" podID="0e20ee33-2442-4708-a491-c2708aaced7c" containerID="aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c" exitCode=0 Jan 22 10:47:30 crc kubenswrapper[4752]: I0122 10:47:30.805670 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerDied","Data":"aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff"} Jan 22 10:47:30 crc kubenswrapper[4752]: I0122 10:47:30.805712 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerDied","Data":"bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a"} Jan 22 10:47:30 crc kubenswrapper[4752]: I0122 10:47:30.805722 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerDied","Data":"aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c"} Jan 22 10:47:31 crc kubenswrapper[4752]: I0122 10:47:31.326461 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:31 crc kubenswrapper[4752]: I0122 10:47:31.343652 4752 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:31 crc kubenswrapper[4752]: I0122 10:47:31.847266 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.009797 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ftcrp"] Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.011098 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.018374 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.019004 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.027307 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ftcrp"] Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.138915 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4kkl\" (UniqueName: \"kubernetes.io/projected/1f13d789-0b3a-4c60-85f5-0e7d02610526-kube-api-access-b4kkl\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.139111 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-scripts\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.139151 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.139317 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-config-data\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.241556 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-scripts\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.241612 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc 
kubenswrapper[4752]: I0122 10:47:32.241648 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-config-data\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.241776 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4kkl\" (UniqueName: \"kubernetes.io/projected/1f13d789-0b3a-4c60-85f5-0e7d02610526-kube-api-access-b4kkl\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.249124 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-config-data\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.249124 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.253583 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-scripts\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.261094 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4kkl\" (UniqueName: \"kubernetes.io/projected/1f13d789-0b3a-4c60-85f5-0e7d02610526-kube-api-access-b4kkl\") pod \"nova-cell1-cell-mapping-ftcrp\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.333254 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.740945 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.812593 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7995555d47-n9qlx"] Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.812849 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerName="dnsmasq-dns" containerID="cri-o://180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e" gracePeriod=10 Jan 22 10:47:32 crc kubenswrapper[4752]: I0122 10:47:32.890848 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ftcrp"] Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.402420 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.565584 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-sb\") pod \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.565658 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-config\") pod \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.565686 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-nb\") pod \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.565733 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rr9l\" (UniqueName: \"kubernetes.io/projected/71047b15-65b1-4b7d-ab73-effd16c9aa8a-kube-api-access-2rr9l\") pod \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.565777 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-svc\") pod \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.565935 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-swift-storage-0\") pod \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\" (UID: \"71047b15-65b1-4b7d-ab73-effd16c9aa8a\") " Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.570930 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71047b15-65b1-4b7d-ab73-effd16c9aa8a-kube-api-access-2rr9l" (OuterVolumeSpecName: "kube-api-access-2rr9l") pod "71047b15-65b1-4b7d-ab73-effd16c9aa8a" (UID: "71047b15-65b1-4b7d-ab73-effd16c9aa8a"). InnerVolumeSpecName "kube-api-access-2rr9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.621851 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71047b15-65b1-4b7d-ab73-effd16c9aa8a" (UID: "71047b15-65b1-4b7d-ab73-effd16c9aa8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.626221 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71047b15-65b1-4b7d-ab73-effd16c9aa8a" (UID: "71047b15-65b1-4b7d-ab73-effd16c9aa8a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.637698 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "71047b15-65b1-4b7d-ab73-effd16c9aa8a" (UID: "71047b15-65b1-4b7d-ab73-effd16c9aa8a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.649449 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-config" (OuterVolumeSpecName: "config") pod "71047b15-65b1-4b7d-ab73-effd16c9aa8a" (UID: "71047b15-65b1-4b7d-ab73-effd16c9aa8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.657237 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71047b15-65b1-4b7d-ab73-effd16c9aa8a" (UID: "71047b15-65b1-4b7d-ab73-effd16c9aa8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.668227 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.668269 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.668285 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.668297 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.668309 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rr9l\" (UniqueName: \"kubernetes.io/projected/71047b15-65b1-4b7d-ab73-effd16c9aa8a-kube-api-access-2rr9l\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.668321 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71047b15-65b1-4b7d-ab73-effd16c9aa8a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.873463 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ftcrp" event={"ID":"1f13d789-0b3a-4c60-85f5-0e7d02610526","Type":"ContainerStarted","Data":"a3d670a343fa7cc0af2d23aa6d17b844e314e9908c3a821aecedab20cca99cd5"} Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.873742 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ftcrp" 
event={"ID":"1f13d789-0b3a-4c60-85f5-0e7d02610526","Type":"ContainerStarted","Data":"7be1fa3121c60eae26e8c9e21677a186290bcc764126bc70d9e3882ba874ac9b"} Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.882182 4752 generic.go:334] "Generic (PLEG): container finished" podID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerID="180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e" exitCode=0 Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.882245 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" event={"ID":"71047b15-65b1-4b7d-ab73-effd16c9aa8a","Type":"ContainerDied","Data":"180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e"} Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.882283 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" event={"ID":"71047b15-65b1-4b7d-ab73-effd16c9aa8a","Type":"ContainerDied","Data":"655598428e7daabf635e443917da8191c0b3e5ccf4bcc22ef7afd5150584eca9"} Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.882315 4752 scope.go:117] "RemoveContainer" containerID="180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.882489 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7995555d47-n9qlx" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.912713 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ftcrp" podStartSLOduration=2.912692594 podStartE2EDuration="2.912692594s" podCreationTimestamp="2026-01-22 10:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:33.894523089 +0000 UTC m=+1333.124466027" watchObservedRunningTime="2026-01-22 10:47:33.912692594 +0000 UTC m=+1333.142635502" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.929038 4752 scope.go:117] "RemoveContainer" containerID="db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.951308 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7995555d47-n9qlx"] Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.963141 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7995555d47-n9qlx"] Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.983701 4752 scope.go:117] "RemoveContainer" containerID="180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e" Jan 22 10:47:33 crc kubenswrapper[4752]: E0122 10:47:33.990173 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e\": container with ID starting with 180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e not found: ID does not exist" containerID="180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.990228 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e"} err="failed to get container status \"180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e\": rpc error: code = NotFound desc = could not find container 
\"180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e\": container with ID starting with 180a4898828bf0feff15a0c0513d50f30eeb85109faf23d5449e1e4a72c67a5e not found: ID does not exist" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.990260 4752 scope.go:117] "RemoveContainer" containerID="db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273" Jan 22 10:47:33 crc kubenswrapper[4752]: E0122 10:47:33.993780 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273\": container with ID starting with db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273 not found: ID does not exist" containerID="db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273" Jan 22 10:47:33 crc kubenswrapper[4752]: I0122 10:47:33.993829 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273"} err="failed to get container status \"db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273\": rpc error: code = NotFound desc = could not find container \"db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273\": container with ID starting with db4d0388cfc5f66d28abb18ea1b4e43ee689feb237449e81071e415a537f0273 not found: ID does not exist" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.427390 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589324 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-log-httpd\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589381 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-ceilometer-tls-certs\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589427 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-combined-ca-bundle\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589537 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-config-data\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589600 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-run-httpd\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589683 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-sg-core-conf-yaml\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589714 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-scripts\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589729 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f886\" (UniqueName: \"kubernetes.io/projected/0e20ee33-2442-4708-a491-c2708aaced7c-kube-api-access-5f886\") pod \"0e20ee33-2442-4708-a491-c2708aaced7c\" (UID: \"0e20ee33-2442-4708-a491-c2708aaced7c\") " Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.589909 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.590287 4752 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.590407 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.595588 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e20ee33-2442-4708-a491-c2708aaced7c-kube-api-access-5f886" (OuterVolumeSpecName: "kube-api-access-5f886") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "kube-api-access-5f886". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.596441 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-scripts" (OuterVolumeSpecName: "scripts") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.620683 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.666311 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.692171 4752 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e20ee33-2442-4708-a491-c2708aaced7c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.692214 4752 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.692228 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f886\" (UniqueName: \"kubernetes.io/projected/0e20ee33-2442-4708-a491-c2708aaced7c-kube-api-access-5f886\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.692241 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.692252 4752 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.702747 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.737264 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-config-data" (OuterVolumeSpecName: "config-data") pod "0e20ee33-2442-4708-a491-c2708aaced7c" (UID: "0e20ee33-2442-4708-a491-c2708aaced7c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.794161 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.794201 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e20ee33-2442-4708-a491-c2708aaced7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.899375 4752 generic.go:334] "Generic (PLEG): container finished" podID="0e20ee33-2442-4708-a491-c2708aaced7c" containerID="bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5" exitCode=0 Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.899469 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerDied","Data":"bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5"} Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.899539 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e20ee33-2442-4708-a491-c2708aaced7c","Type":"ContainerDied","Data":"3d1b0e62fee62093944a18f4c516364a6013f1927006ed16b9da34e16a01ab24"} Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.899584 4752 scope.go:117] "RemoveContainer" containerID="aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.899816 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.937555 4752 scope.go:117] "RemoveContainer" containerID="bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.978925 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.983094 4752 scope.go:117] "RemoveContainer" containerID="aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.989633 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.998315 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:34 crc kubenswrapper[4752]: E0122 10:47:34.998836 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="sg-core" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.998876 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="sg-core" Jan 22 10:47:34 crc kubenswrapper[4752]: E0122 10:47:34.998895 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-notification-agent" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.998904 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-notification-agent" Jan 22 10:47:34 crc kubenswrapper[4752]: E0122 10:47:34.998917 4752 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-central-agent" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.998928 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-central-agent" Jan 22 10:47:34 crc kubenswrapper[4752]: E0122 10:47:34.998941 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerName="init" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.998950 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerName="init" Jan 22 10:47:34 crc kubenswrapper[4752]: E0122 10:47:34.998964 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerName="dnsmasq-dns" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.998972 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerName="dnsmasq-dns" Jan 22 10:47:34 crc kubenswrapper[4752]: E0122 10:47:34.999012 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="proxy-httpd" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.999020 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="proxy-httpd" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.999246 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="sg-core" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.999273 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" containerName="dnsmasq-dns" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.999290 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-notification-agent" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.999307 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="ceilometer-central-agent" Jan 22 10:47:34 crc kubenswrapper[4752]: I0122 10:47:34.999327 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" containerName="proxy-httpd" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.001659 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.004344 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.006115 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.006378 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.008775 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.027927 4752 scope.go:117] "RemoveContainer" containerID="bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.063720 4752 scope.go:117] "RemoveContainer" containerID="aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff" Jan 22 10:47:35 crc kubenswrapper[4752]: E0122 10:47:35.065193 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff\": container with ID starting with aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff not found: ID does not exist" containerID="aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.065234 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff"} err="failed to get container status \"aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff\": rpc error: code = NotFound desc = could not find container \"aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff\": container with ID starting with aa719c8b5278e1d317e465388c3269d6e987b4cc054b92287d00b49fe80131ff not found: ID does not exist" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.065264 4752 scope.go:117] "RemoveContainer" containerID="bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a" Jan 22 10:47:35 crc kubenswrapper[4752]: E0122 10:47:35.065591 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a\": container with ID starting with bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a not found: ID does not exist" containerID="bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.065637 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a"} err="failed to get container status \"bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a\": rpc error: code = NotFound desc = could not find container \"bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a\": container with ID starting with bbe75551e662c33fc2485a91cd1d216fe6885d9042dee89b68a60b5ef536167a not found: ID does not exist" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.065884 4752 scope.go:117] "RemoveContainer" containerID="aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c" Jan 22 10:47:35 
crc kubenswrapper[4752]: E0122 10:47:35.066207 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c\": container with ID starting with aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c not found: ID does not exist" containerID="aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.066227 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c"} err="failed to get container status \"aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c\": rpc error: code = NotFound desc = could not find container \"aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c\": container with ID starting with aec4fcf0a536bf2ec7c791a4ef41cb51cc3db86bd8f7c2d56bb7d77eab7cad2c not found: ID does not exist" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.066241 4752 scope.go:117] "RemoveContainer" containerID="bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5" Jan 22 10:47:35 crc kubenswrapper[4752]: E0122 10:47:35.066431 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5\": container with ID starting with bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5 not found: ID does not exist" containerID="bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.066455 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5"} err="failed to get container status \"bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5\": rpc error: code = NotFound desc = could not find container \"bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5\": container with ID starting with bd38944f3f808f3ed67e68b89204a9b3fe33816e705be263c4006713e1e957d5 not found: ID does not exist" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101187 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101287 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b9081b-e283-49c6-8566-ff644b67f461-run-httpd\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101314 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b9081b-e283-49c6-8566-ff644b67f461-log-httpd\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101334 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-config-data\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101379 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101435 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjx7\" (UniqueName: \"kubernetes.io/projected/71b9081b-e283-49c6-8566-ff644b67f461-kube-api-access-8bjx7\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101529 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-scripts\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.101600 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.110280 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e20ee33-2442-4708-a491-c2708aaced7c" path="/var/lib/kubelet/pods/0e20ee33-2442-4708-a491-c2708aaced7c/volumes" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.111129 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71047b15-65b1-4b7d-ab73-effd16c9aa8a" path="/var/lib/kubelet/pods/71047b15-65b1-4b7d-ab73-effd16c9aa8a/volumes" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203243 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b9081b-e283-49c6-8566-ff644b67f461-run-httpd\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203300 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b9081b-e283-49c6-8566-ff644b67f461-log-httpd\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203330 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-config-data\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203426 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203458 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bjx7\" (UniqueName: \"kubernetes.io/projected/71b9081b-e283-49c6-8566-ff644b67f461-kube-api-access-8bjx7\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203489 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-scripts\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203520 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.203588 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.205739 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b9081b-e283-49c6-8566-ff644b67f461-run-httpd\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.206207 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b9081b-e283-49c6-8566-ff644b67f461-log-httpd\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.209264 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-config-data\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.209692 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-scripts\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.211902 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.212344 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " 
pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.222445 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b9081b-e283-49c6-8566-ff644b67f461-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.225525 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bjx7\" (UniqueName: \"kubernetes.io/projected/71b9081b-e283-49c6-8566-ff644b67f461-kube-api-access-8bjx7\") pod \"ceilometer-0\" (UID: \"71b9081b-e283-49c6-8566-ff644b67f461\") " pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.329077 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.798620 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 10:47:35 crc kubenswrapper[4752]: I0122 10:47:35.914132 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b9081b-e283-49c6-8566-ff644b67f461","Type":"ContainerStarted","Data":"ffbefe230edb79fa7cb786e7ae962da418a2db28133588bf9dca15f072f33951"} Jan 22 10:47:37 crc kubenswrapper[4752]: I0122 10:47:37.979317 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b9081b-e283-49c6-8566-ff644b67f461","Type":"ContainerStarted","Data":"f0fdb19aafd989becf6691df2a260d4333529cc02cad57bd8f6208c6ca95d3b2"} Jan 22 10:47:38 crc kubenswrapper[4752]: I0122 10:47:38.187716 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:47:38 crc kubenswrapper[4752]: I0122 10:47:38.187780 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:47:39 crc kubenswrapper[4752]: I0122 10:47:39.195107 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:39 crc kubenswrapper[4752]: I0122 10:47:39.202059 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:40 crc kubenswrapper[4752]: I0122 10:47:40.003290 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b9081b-e283-49c6-8566-ff644b67f461","Type":"ContainerStarted","Data":"4ff8e6a6e2b8acafb11b871218d1cb97b17464134b89bb22bbfcafe8a8a04665"} Jan 22 10:47:40 crc kubenswrapper[4752]: I0122 10:47:40.005931 4752 generic.go:334] "Generic (PLEG): container finished" podID="1f13d789-0b3a-4c60-85f5-0e7d02610526" containerID="a3d670a343fa7cc0af2d23aa6d17b844e314e9908c3a821aecedab20cca99cd5" exitCode=0 Jan 22 10:47:40 crc kubenswrapper[4752]: I0122 10:47:40.005977 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ftcrp" 
event={"ID":"1f13d789-0b3a-4c60-85f5-0e7d02610526","Type":"ContainerDied","Data":"a3d670a343fa7cc0af2d23aa6d17b844e314e9908c3a821aecedab20cca99cd5"} Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.020179 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b9081b-e283-49c6-8566-ff644b67f461","Type":"ContainerStarted","Data":"138aca69381913fd1c0055ca5e14010a6535b74d851c5b86914cb9ac3e1dbffd"} Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.501417 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.650887 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-config-data\") pod \"1f13d789-0b3a-4c60-85f5-0e7d02610526\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.651258 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-combined-ca-bundle\") pod \"1f13d789-0b3a-4c60-85f5-0e7d02610526\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.651306 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-scripts\") pod \"1f13d789-0b3a-4c60-85f5-0e7d02610526\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.651394 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4kkl\" (UniqueName: \"kubernetes.io/projected/1f13d789-0b3a-4c60-85f5-0e7d02610526-kube-api-access-b4kkl\") pod \"1f13d789-0b3a-4c60-85f5-0e7d02610526\" (UID: \"1f13d789-0b3a-4c60-85f5-0e7d02610526\") " Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.657040 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f13d789-0b3a-4c60-85f5-0e7d02610526-kube-api-access-b4kkl" (OuterVolumeSpecName: "kube-api-access-b4kkl") pod "1f13d789-0b3a-4c60-85f5-0e7d02610526" (UID: "1f13d789-0b3a-4c60-85f5-0e7d02610526"). InnerVolumeSpecName "kube-api-access-b4kkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.661037 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-scripts" (OuterVolumeSpecName: "scripts") pod "1f13d789-0b3a-4c60-85f5-0e7d02610526" (UID: "1f13d789-0b3a-4c60-85f5-0e7d02610526"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.682732 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-config-data" (OuterVolumeSpecName: "config-data") pod "1f13d789-0b3a-4c60-85f5-0e7d02610526" (UID: "1f13d789-0b3a-4c60-85f5-0e7d02610526"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.693246 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f13d789-0b3a-4c60-85f5-0e7d02610526" (UID: "1f13d789-0b3a-4c60-85f5-0e7d02610526"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.753873 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.753913 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.753923 4752 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f13d789-0b3a-4c60-85f5-0e7d02610526-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:41 crc kubenswrapper[4752]: I0122 10:47:41.753931 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4kkl\" (UniqueName: \"kubernetes.io/projected/1f13d789-0b3a-4c60-85f5-0e7d02610526-kube-api-access-b4kkl\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.031866 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ftcrp" event={"ID":"1f13d789-0b3a-4c60-85f5-0e7d02610526","Type":"ContainerDied","Data":"7be1fa3121c60eae26e8c9e21677a186290bcc764126bc70d9e3882ba874ac9b"} Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.031911 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be1fa3121c60eae26e8c9e21677a186290bcc764126bc70d9e3882ba874ac9b" Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.031920 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ftcrp" Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.298302 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.301907 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-api" containerID="cri-o://c47dc9ec7a98b230bca7d4de9a9df4ddba077d5ef13f50efa77d281b78a5e8af" gracePeriod=30 Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.302218 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-log" containerID="cri-o://95efb1892a250da53fe5961ce4185b7f54a8a0de3c328b6f0249f1b4b4742817" gracePeriod=30 Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.325987 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.326248 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d08b87c3-3701-4177-b444-ec69e10c7ae1" containerName="nova-scheduler-scheduler" containerID="cri-o://0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651" gracePeriod=30 Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.376307 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.376588 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-log" containerID="cri-o://ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae" gracePeriod=30 Jan 22 10:47:42 crc kubenswrapper[4752]: I0122 10:47:42.377126 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-metadata" containerID="cri-o://6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f" gracePeriod=30 Jan 22 10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.052483 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b9081b-e283-49c6-8566-ff644b67f461","Type":"ContainerStarted","Data":"f0aa62d7906ad96d28d8a2b0c22a75df613125af977d68f1e6854b34a8749f84"} Jan 22 10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.052573 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.058611 4752 generic.go:334] "Generic (PLEG): container finished" podID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerID="95efb1892a250da53fe5961ce4185b7f54a8a0de3c328b6f0249f1b4b4742817" exitCode=143 Jan 22 10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.058689 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d0f435e-1113-41cd-9154-fc1db045abe1","Type":"ContainerDied","Data":"95efb1892a250da53fe5961ce4185b7f54a8a0de3c328b6f0249f1b4b4742817"} Jan 22 10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.061260 4752 generic.go:334] "Generic (PLEG): container finished" podID="b07628b5-105b-4de5-a644-f6a37786572f" containerID="ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae" exitCode=143 Jan 22 
10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.061309 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b07628b5-105b-4de5-a644-f6a37786572f","Type":"ContainerDied","Data":"ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae"} Jan 22 10:47:43 crc kubenswrapper[4752]: I0122 10:47:43.093846 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.047515139 podStartE2EDuration="9.093830292s" podCreationTimestamp="2026-01-22 10:47:34 +0000 UTC" firstStartedPulling="2026-01-22 10:47:35.802531856 +0000 UTC m=+1335.032474774" lastFinishedPulling="2026-01-22 10:47:41.848847019 +0000 UTC m=+1341.078789927" observedRunningTime="2026-01-22 10:47:43.088948027 +0000 UTC m=+1342.318890965" watchObservedRunningTime="2026-01-22 10:47:43.093830292 +0000 UTC m=+1342.323773200" Jan 22 10:47:43 crc kubenswrapper[4752]: E0122 10:47:43.837012 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651 is running failed: container process not found" containerID="0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:47:43 crc kubenswrapper[4752]: E0122 10:47:43.840807 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651 is running failed: container process not found" containerID="0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:47:43 crc kubenswrapper[4752]: E0122 10:47:43.850017 4752 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651 is running failed: container process not found" containerID="0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 10:47:43 crc kubenswrapper[4752]: E0122 10:47:43.850103 4752 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d08b87c3-3701-4177-b444-ec69e10c7ae1" containerName="nova-scheduler-scheduler" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.007802 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": read tcp 10.217.0.2:41506->10.217.0.218:8775: read: connection reset by peer" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.008143 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": read tcp 10.217.0.2:41494->10.217.0.218:8775: read: connection reset by peer" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.071489 
4752 generic.go:334] "Generic (PLEG): container finished" podID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerID="c47dc9ec7a98b230bca7d4de9a9df4ddba077d5ef13f50efa77d281b78a5e8af" exitCode=0 Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.071567 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d0f435e-1113-41cd-9154-fc1db045abe1","Type":"ContainerDied","Data":"c47dc9ec7a98b230bca7d4de9a9df4ddba077d5ef13f50efa77d281b78a5e8af"} Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.072760 4752 generic.go:334] "Generic (PLEG): container finished" podID="d08b87c3-3701-4177-b444-ec69e10c7ae1" containerID="0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651" exitCode=0 Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.072925 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08b87c3-3701-4177-b444-ec69e10c7ae1","Type":"ContainerDied","Data":"0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651"} Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.072971 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d08b87c3-3701-4177-b444-ec69e10c7ae1","Type":"ContainerDied","Data":"3e5cccb722dec6ce97f87b190ef807f817818500ad62b2578990945ca6bf364d"} Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.072982 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5cccb722dec6ce97f87b190ef807f817818500ad62b2578990945ca6bf364d" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.292216 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.405347 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.410729 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-config-data\") pod \"d08b87c3-3701-4177-b444-ec69e10c7ae1\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.411173 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-combined-ca-bundle\") pod \"d08b87c3-3701-4177-b444-ec69e10c7ae1\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.411579 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bs8x\" (UniqueName: \"kubernetes.io/projected/d08b87c3-3701-4177-b444-ec69e10c7ae1-kube-api-access-9bs8x\") pod \"d08b87c3-3701-4177-b444-ec69e10c7ae1\" (UID: \"d08b87c3-3701-4177-b444-ec69e10c7ae1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.429918 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08b87c3-3701-4177-b444-ec69e10c7ae1-kube-api-access-9bs8x" (OuterVolumeSpecName: "kube-api-access-9bs8x") pod "d08b87c3-3701-4177-b444-ec69e10c7ae1" (UID: "d08b87c3-3701-4177-b444-ec69e10c7ae1"). InnerVolumeSpecName "kube-api-access-9bs8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.482699 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-config-data" (OuterVolumeSpecName: "config-data") pod "d08b87c3-3701-4177-b444-ec69e10c7ae1" (UID: "d08b87c3-3701-4177-b444-ec69e10c7ae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.508987 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d08b87c3-3701-4177-b444-ec69e10c7ae1" (UID: "d08b87c3-3701-4177-b444-ec69e10c7ae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.515069 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.526323 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-config-data\") pod \"5d0f435e-1113-41cd-9154-fc1db045abe1\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.526421 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-combined-ca-bundle\") pod \"5d0f435e-1113-41cd-9154-fc1db045abe1\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.526453 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-internal-tls-certs\") pod \"5d0f435e-1113-41cd-9154-fc1db045abe1\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.526490 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmtl4\" (UniqueName: \"kubernetes.io/projected/5d0f435e-1113-41cd-9154-fc1db045abe1-kube-api-access-wmtl4\") pod \"5d0f435e-1113-41cd-9154-fc1db045abe1\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.526520 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0f435e-1113-41cd-9154-fc1db045abe1-logs\") pod \"5d0f435e-1113-41cd-9154-fc1db045abe1\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.526562 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-public-tls-certs\") pod \"5d0f435e-1113-41cd-9154-fc1db045abe1\" (UID: \"5d0f435e-1113-41cd-9154-fc1db045abe1\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.527015 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.527032 4752 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d08b87c3-3701-4177-b444-ec69e10c7ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.527043 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bs8x\" (UniqueName: \"kubernetes.io/projected/d08b87c3-3701-4177-b444-ec69e10c7ae1-kube-api-access-9bs8x\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.527777 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0f435e-1113-41cd-9154-fc1db045abe1-logs" (OuterVolumeSpecName: "logs") pod "5d0f435e-1113-41cd-9154-fc1db045abe1" (UID: "5d0f435e-1113-41cd-9154-fc1db045abe1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.531035 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0f435e-1113-41cd-9154-fc1db045abe1-kube-api-access-wmtl4" (OuterVolumeSpecName: "kube-api-access-wmtl4") pod "5d0f435e-1113-41cd-9154-fc1db045abe1" (UID: "5d0f435e-1113-41cd-9154-fc1db045abe1"). InnerVolumeSpecName "kube-api-access-wmtl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.573006 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-config-data" (OuterVolumeSpecName: "config-data") pod "5d0f435e-1113-41cd-9154-fc1db045abe1" (UID: "5d0f435e-1113-41cd-9154-fc1db045abe1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.591043 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d0f435e-1113-41cd-9154-fc1db045abe1" (UID: "5d0f435e-1113-41cd-9154-fc1db045abe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.591144 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5d0f435e-1113-41cd-9154-fc1db045abe1" (UID: "5d0f435e-1113-41cd-9154-fc1db045abe1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.621283 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5d0f435e-1113-41cd-9154-fc1db045abe1" (UID: "5d0f435e-1113-41cd-9154-fc1db045abe1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.631467 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-nova-metadata-tls-certs\") pod \"b07628b5-105b-4de5-a644-f6a37786572f\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.631516 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4v6c\" (UniqueName: \"kubernetes.io/projected/b07628b5-105b-4de5-a644-f6a37786572f-kube-api-access-b4v6c\") pod \"b07628b5-105b-4de5-a644-f6a37786572f\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.631636 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-combined-ca-bundle\") pod \"b07628b5-105b-4de5-a644-f6a37786572f\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.631777 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-config-data\") pod \"b07628b5-105b-4de5-a644-f6a37786572f\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.631801 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07628b5-105b-4de5-a644-f6a37786572f-logs\") pod \"b07628b5-105b-4de5-a644-f6a37786572f\" (UID: \"b07628b5-105b-4de5-a644-f6a37786572f\") " Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632246 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b07628b5-105b-4de5-a644-f6a37786572f-logs" (OuterVolumeSpecName: "logs") pod "b07628b5-105b-4de5-a644-f6a37786572f" (UID: "b07628b5-105b-4de5-a644-f6a37786572f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632865 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07628b5-105b-4de5-a644-f6a37786572f-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632903 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632917 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632927 4752 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632936 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmtl4\" (UniqueName: \"kubernetes.io/projected/5d0f435e-1113-41cd-9154-fc1db045abe1-kube-api-access-wmtl4\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632945 4752 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d0f435e-1113-41cd-9154-fc1db045abe1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.632953 4752 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d0f435e-1113-41cd-9154-fc1db045abe1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.637894 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07628b5-105b-4de5-a644-f6a37786572f-kube-api-access-b4v6c" (OuterVolumeSpecName: "kube-api-access-b4v6c") pod "b07628b5-105b-4de5-a644-f6a37786572f" (UID: "b07628b5-105b-4de5-a644-f6a37786572f"). InnerVolumeSpecName "kube-api-access-b4v6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.660963 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-config-data" (OuterVolumeSpecName: "config-data") pod "b07628b5-105b-4de5-a644-f6a37786572f" (UID: "b07628b5-105b-4de5-a644-f6a37786572f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.661019 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b07628b5-105b-4de5-a644-f6a37786572f" (UID: "b07628b5-105b-4de5-a644-f6a37786572f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.680821 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b07628b5-105b-4de5-a644-f6a37786572f" (UID: "b07628b5-105b-4de5-a644-f6a37786572f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.735113 4752 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.735153 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4v6c\" (UniqueName: \"kubernetes.io/projected/b07628b5-105b-4de5-a644-f6a37786572f-kube-api-access-b4v6c\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.735166 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:44 crc kubenswrapper[4752]: I0122 10:47:44.735177 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07628b5-105b-4de5-a644-f6a37786572f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.084620 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.084641 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d0f435e-1113-41cd-9154-fc1db045abe1","Type":"ContainerDied","Data":"33f5c4b8db06e804bf15d45cb4eb3fd06a75e01a8d4a3f1075a861d5ae3a4ee0"} Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.084825 4752 scope.go:117] "RemoveContainer" containerID="c47dc9ec7a98b230bca7d4de9a9df4ddba077d5ef13f50efa77d281b78a5e8af" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.088895 4752 generic.go:334] "Generic (PLEG): container finished" podID="b07628b5-105b-4de5-a644-f6a37786572f" containerID="6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f" exitCode=0 Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.088977 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.089017 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.095648 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b07628b5-105b-4de5-a644-f6a37786572f","Type":"ContainerDied","Data":"6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f"} Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.095710 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b07628b5-105b-4de5-a644-f6a37786572f","Type":"ContainerDied","Data":"dd1b5c769f6f9bb100bdffc0eb52ffce2f8f7b621bed5991ead41b6ff703ef17"} Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.114574 4752 scope.go:117] "RemoveContainer" containerID="95efb1892a250da53fe5961ce4185b7f54a8a0de3c328b6f0249f1b4b4742817" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.133299 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.142355 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.149191 4752 scope.go:117] "RemoveContainer" containerID="6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.175084 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.195595 4752 scope.go:117] "RemoveContainer" containerID="ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.233067 4752 scope.go:117] "RemoveContainer" containerID="6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.234182 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f\": container with ID starting with 6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f not found: ID does not exist" containerID="6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.234242 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f"} err="failed to get container status \"6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f\": rpc error: code = NotFound desc = could not find container \"6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f\": container with ID starting with 6da35232cb5846de1a886eab007d17baa977a281f531dd6cb9dec1d70114817f not found: ID does not exist" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.234276 4752 scope.go:117] "RemoveContainer" containerID="ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.234750 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae\": container with ID starting with ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae not found: ID does not exist" containerID="ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae" Jan 22 10:47:45 crc 
kubenswrapper[4752]: I0122 10:47:45.234782 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae"} err="failed to get container status \"ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae\": rpc error: code = NotFound desc = could not find container \"ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae\": container with ID starting with ed8b014d0917b641eb8214fd8206ccc83226ee8735a122f1890067a49ca147ae not found: ID does not exist" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.238732 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.250698 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.251225 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-api" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251242 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-api" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.251266 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08b87c3-3701-4177-b444-ec69e10c7ae1" containerName="nova-scheduler-scheduler" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251273 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08b87c3-3701-4177-b444-ec69e10c7ae1" containerName="nova-scheduler-scheduler" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.251286 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-metadata" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251293 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-metadata" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.251303 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-log" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251311 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-log" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.251326 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-log" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251333 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-log" Jan 22 10:47:45 crc kubenswrapper[4752]: E0122 10:47:45.251359 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f13d789-0b3a-4c60-85f5-0e7d02610526" containerName="nova-manage" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251365 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f13d789-0b3a-4c60-85f5-0e7d02610526" containerName="nova-manage" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251560 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08b87c3-3701-4177-b444-ec69e10c7ae1" containerName="nova-scheduler-scheduler" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251578 4752 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-log" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251587 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f13d789-0b3a-4c60-85f5-0e7d02610526" containerName="nova-manage" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251602 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-log" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251619 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07628b5-105b-4de5-a644-f6a37786572f" containerName="nova-metadata-metadata" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.251629 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" containerName="nova-api-api" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.252902 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.257362 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.257641 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.257808 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.274377 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.278004 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.282528 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.282632 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.291963 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.317767 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.330474 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.339090 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.344224 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.344300 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-config-data\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.344360 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9md\" (UniqueName: \"kubernetes.io/projected/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-kube-api-access-8d9md\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.344382 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.344397 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.344474 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-logs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.347314 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.349066 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.351225 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.360196 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.447919 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-logs\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448005 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6fq\" (UniqueName: \"kubernetes.io/projected/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-kube-api-access-dx6fq\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448096 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448158 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-logs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448238 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtx8\" (UniqueName: \"kubernetes.io/projected/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-kube-api-access-zrtx8\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448388 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-config-data\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448415 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448474 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448511 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448551 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-config-data\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448604 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9md\" (UniqueName: \"kubernetes.io/projected/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-kube-api-access-8d9md\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448635 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448655 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448690 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-logs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.448717 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-config-data\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.454689 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-config-data\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.465647 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.467568 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-public-tls-certs\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.475837 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d9md\" 
(UniqueName: \"kubernetes.io/projected/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-kube-api-access-8d9md\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.476350 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d6d100-ab9a-4fbb-b136-368ae770bb7c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9d6d100-ab9a-4fbb-b136-368ae770bb7c\") " pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.550951 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.551794 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-config-data\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.552031 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-logs\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.552219 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6fq\" (UniqueName: \"kubernetes.io/projected/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-kube-api-access-dx6fq\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.552422 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-logs\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.552719 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.552891 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtx8\" (UniqueName: \"kubernetes.io/projected/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-kube-api-access-zrtx8\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.553069 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-config-data\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.553191 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.554258 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.555459 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-config-data\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.556985 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.559087 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.566910 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-config-data\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.570915 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6fq\" (UniqueName: \"kubernetes.io/projected/2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5-kube-api-access-dx6fq\") pod \"nova-scheduler-0\" (UID: \"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5\") " pod="openstack/nova-scheduler-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.571628 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtx8\" (UniqueName: \"kubernetes.io/projected/ff6b3815-c6fc-49c3-885d-d393d6eb7f05-kube-api-access-zrtx8\") pod \"nova-metadata-0\" (UID: \"ff6b3815-c6fc-49c3-885d-d393d6eb7f05\") " pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.587580 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.599469 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 10:47:45 crc kubenswrapper[4752]: I0122 10:47:45.669176 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 10:47:46 crc kubenswrapper[4752]: I0122 10:47:46.112208 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 10:47:46 crc kubenswrapper[4752]: I0122 10:47:46.233626 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 10:47:46 crc kubenswrapper[4752]: I0122 10:47:46.244447 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 10:47:46 crc kubenswrapper[4752]: W0122 10:47:46.249081 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf5bd3f_6077_4217_ba22_4e6ae6ed2eb5.slice/crio-6030abca4e4be2263665f9fa9e16161f6eedfe42159f065aa88a83111de5b58b WatchSource:0}: Error finding container 6030abca4e4be2263665f9fa9e16161f6eedfe42159f065aa88a83111de5b58b: Status 404 returned error can't find the container with id 6030abca4e4be2263665f9fa9e16161f6eedfe42159f065aa88a83111de5b58b Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.110890 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d0f435e-1113-41cd-9154-fc1db045abe1" path="/var/lib/kubelet/pods/5d0f435e-1113-41cd-9154-fc1db045abe1/volumes" Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.112091 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07628b5-105b-4de5-a644-f6a37786572f" path="/var/lib/kubelet/pods/b07628b5-105b-4de5-a644-f6a37786572f/volumes" Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.112635 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08b87c3-3701-4177-b444-ec69e10c7ae1" path="/var/lib/kubelet/pods/d08b87c3-3701-4177-b444-ec69e10c7ae1/volumes" Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.125629 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff6b3815-c6fc-49c3-885d-d393d6eb7f05","Type":"ContainerStarted","Data":"d6f537012b12b78b52d29772e39f09b71163b9b301dcc0d732b1f68a63d25d93"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.125680 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff6b3815-c6fc-49c3-885d-d393d6eb7f05","Type":"ContainerStarted","Data":"e32e7bd5ce2b81570c994ae489db7054f889bb0ccf88bdfd27849abe47fc52da"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.125693 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff6b3815-c6fc-49c3-885d-d393d6eb7f05","Type":"ContainerStarted","Data":"6a9ff56f6ce98c1af11f799eaf8f08a77dabe70201239851697771a5e14fad08"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.129960 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5","Type":"ContainerStarted","Data":"6da5e6947aebba4115058f9357b896a262e6d1a12e340a79c5f73603ef368a32"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.130004 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cf5bd3f-6077-4217-ba22-4e6ae6ed2eb5","Type":"ContainerStarted","Data":"6030abca4e4be2263665f9fa9e16161f6eedfe42159f065aa88a83111de5b58b"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.131555 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e9d6d100-ab9a-4fbb-b136-368ae770bb7c","Type":"ContainerStarted","Data":"b62c676bcbd5cf7b465e155c0dfef847e9219bb91f520e5c8d8d15e954819d51"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.131584 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9d6d100-ab9a-4fbb-b136-368ae770bb7c","Type":"ContainerStarted","Data":"1760c2a468884593f5e403540314e95366aa7ed5ab2c6bfcf649240f211b0f65"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.131598 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9d6d100-ab9a-4fbb-b136-368ae770bb7c","Type":"ContainerStarted","Data":"26280c780c15cb6462f884c6647b6db8d862bdad595dae3c99de13da2eacbc17"} Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.149967 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.1498488 podStartE2EDuration="2.1498488s" podCreationTimestamp="2026-01-22 10:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:47.145796486 +0000 UTC m=+1346.375739414" watchObservedRunningTime="2026-01-22 10:47:47.1498488 +0000 UTC m=+1346.379791708" Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.169973 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.169951935 podStartE2EDuration="2.169951935s" podCreationTimestamp="2026-01-22 10:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:47.162233397 +0000 UTC m=+1346.392176305" watchObservedRunningTime="2026-01-22 10:47:47.169951935 +0000 UTC m=+1346.399894853" Jan 22 10:47:47 crc kubenswrapper[4752]: I0122 10:47:47.190914 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.1908964109999998 podStartE2EDuration="2.190896411s" podCreationTimestamp="2026-01-22 10:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:47:47.17639326 +0000 UTC m=+1346.406336178" watchObservedRunningTime="2026-01-22 10:47:47.190896411 +0000 UTC m=+1346.420839319" Jan 22 10:47:50 crc kubenswrapper[4752]: I0122 10:47:50.600068 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:47:50 crc kubenswrapper[4752]: I0122 10:47:50.600338 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 10:47:50 crc kubenswrapper[4752]: I0122 10:47:50.670635 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 10:47:55 crc kubenswrapper[4752]: I0122 10:47:55.587765 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:47:55 crc kubenswrapper[4752]: I0122 10:47:55.588295 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 10:47:55 crc kubenswrapper[4752]: I0122 10:47:55.601132 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 10:47:55 crc kubenswrapper[4752]: I0122 10:47:55.601178 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Jan 22 10:47:55 crc kubenswrapper[4752]: I0122 10:47:55.670615 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 10:47:55 crc kubenswrapper[4752]: I0122 10:47:55.720993 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 10:47:56 crc kubenswrapper[4752]: I0122 10:47:56.257182 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 10:47:56 crc kubenswrapper[4752]: I0122 10:47:56.600067 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e9d6d100-ab9a-4fbb-b136-368ae770bb7c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:56 crc kubenswrapper[4752]: I0122 10:47:56.600080 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e9d6d100-ab9a-4fbb-b136-368ae770bb7c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:56 crc kubenswrapper[4752]: I0122 10:47:56.614179 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ff6b3815-c6fc-49c3-885d-d393d6eb7f05" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 10:47:56 crc kubenswrapper[4752]: I0122 10:47:56.614196 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ff6b3815-c6fc-49c3-885d-d393d6eb7f05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.344650 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.596465 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.597531 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.605736 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.610846 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.612415 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.617360 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:48:05 crc kubenswrapper[4752]: I0122 10:48:05.624483 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 10:48:06 crc kubenswrapper[4752]: I0122 10:48:06.359306 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 10:48:06 crc kubenswrapper[4752]: I0122 
10:48:06.453220 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 10:48:06 crc kubenswrapper[4752]: I0122 10:48:06.465132 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 10:48:15 crc kubenswrapper[4752]: I0122 10:48:15.940917 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:48:17 crc kubenswrapper[4752]: I0122 10:48:17.166086 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:48:19 crc kubenswrapper[4752]: I0122 10:48:19.808741 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="rabbitmq" containerID="cri-o://1b5ba6af172ae5f9bedb82b945f78a0e42a7b55eb9705883940015026f42522c" gracePeriod=604797 Jan 22 10:48:20 crc kubenswrapper[4752]: I0122 10:48:20.900388 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="rabbitmq" containerID="cri-o://d0999d11a8ab5b45c1bedd6f8b324decb051608cf2bac2b53cde37e431c0559a" gracePeriod=604797 Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.203029 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dphbx"] Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.205631 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.218074 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dphbx"] Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.359653 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj76t\" (UniqueName: \"kubernetes.io/projected/c849f4e5-68ce-41d2-9dd3-5766b881f54c-kube-api-access-pj76t\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.360357 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-catalog-content\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.360520 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-utilities\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.462776 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-catalog-content\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.462854 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-utilities\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.463037 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj76t\" (UniqueName: \"kubernetes.io/projected/c849f4e5-68ce-41d2-9dd3-5766b881f54c-kube-api-access-pj76t\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.463619 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-utilities\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.463628 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-catalog-content\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.491345 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj76t\" (UniqueName: \"kubernetes.io/projected/c849f4e5-68ce-41d2-9dd3-5766b881f54c-kube-api-access-pj76t\") pod \"redhat-operators-dphbx\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.560324 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.570495 4752 generic.go:334] "Generic (PLEG): container finished" podID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerID="1b5ba6af172ae5f9bedb82b945f78a0e42a7b55eb9705883940015026f42522c" exitCode=0 Jan 22 10:48:21 crc kubenswrapper[4752]: I0122 10:48:21.570651 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52","Type":"ContainerDied","Data":"1b5ba6af172ae5f9bedb82b945f78a0e42a7b55eb9705883940015026f42522c"} Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.009302 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.191607 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-plugins-conf\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.193888 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-erlang-cookie\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.193941 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-config-data\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.193988 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194037 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrj5\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-kube-api-access-mhrj5\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194056 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-plugins\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: W0122 10:48:22.193591 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc849f4e5_68ce_41d2_9dd3_5766b881f54c.slice/crio-f18c9490e4a7822e8bc19f4578d423a1744c8f9ef4435b058104294543501d8e WatchSource:0}: Error finding container f18c9490e4a7822e8bc19f4578d423a1744c8f9ef4435b058104294543501d8e: Status 404 returned error can't find the container with id f18c9490e4a7822e8bc19f4578d423a1744c8f9ef4435b058104294543501d8e Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.191915 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dphbx"] Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194119 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-erlang-cookie-secret\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194183 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-server-conf\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: 
\"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194327 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-confd\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194350 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-tls\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194396 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-pod-info\") pod \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\" (UID: \"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52\") " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194478 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.194897 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.195241 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.195262 4752 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.195727 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.204093 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-kube-api-access-mhrj5" (OuterVolumeSpecName: "kube-api-access-mhrj5") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "kube-api-access-mhrj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.206181 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.218498 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-pod-info" (OuterVolumeSpecName: "pod-info") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.238021 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.240491 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.290336 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-server-conf" (OuterVolumeSpecName: "server-conf") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.295241 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-config-data" (OuterVolumeSpecName: "config-data") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296526 4752 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296547 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296557 4752 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296566 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296584 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296595 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrj5\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-kube-api-access-mhrj5\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296603 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.296611 4752 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.339372 4752 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.400236 4752 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.407529 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" (UID: "86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.502215 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.605248 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.606290 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52","Type":"ContainerDied","Data":"50da313b5fe2d90ac284aafd0acb71bbfe925125d7399a5390444641f52d9ab0"} Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.606371 4752 scope.go:117] "RemoveContainer" containerID="1b5ba6af172ae5f9bedb82b945f78a0e42a7b55eb9705883940015026f42522c" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.643147 4752 generic.go:334] "Generic (PLEG): container finished" podID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerID="d0999d11a8ab5b45c1bedd6f8b324decb051608cf2bac2b53cde37e431c0559a" exitCode=0 Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.643237 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9356406a-3c6e-4af1-a8bb-92244286ba39","Type":"ContainerDied","Data":"d0999d11a8ab5b45c1bedd6f8b324decb051608cf2bac2b53cde37e431c0559a"} Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.677291 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerStarted","Data":"f18c9490e4a7822e8bc19f4578d423a1744c8f9ef4435b058104294543501d8e"} Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.690470 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.703839 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.767922 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:48:22 crc kubenswrapper[4752]: E0122 10:48:22.768610 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="rabbitmq" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.768692 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="rabbitmq" Jan 22 10:48:22 crc kubenswrapper[4752]: E0122 10:48:22.768763 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="setup-container" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.768818 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="setup-container" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.769067 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" containerName="rabbitmq" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.770173 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.782255 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.782471 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.782582 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.782709 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.782888 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.783004 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.783187 4752 scope.go:117] "RemoveContainer" containerID="b552ddb1cbe37b0846222181a70e2b5fabda4c2e97937ec8510628d7daab0110" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.795072 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bshlx" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.804398 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932597 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932681 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7f8\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-kube-api-access-lh7f8\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932726 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932789 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932812 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: 
I0122 10:48:22.932872 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932901 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.932931 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.933000 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.933046 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.933078 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:22 crc kubenswrapper[4752]: I0122 10:48:22.943224 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.034728 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-plugins-conf\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.034813 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-confd\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.034838 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9356406a-3c6e-4af1-a8bb-92244286ba39-erlang-cookie-secret\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.034867 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-tls\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.034894 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-config-data\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.034955 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-erlang-cookie\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035003 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9356406a-3c6e-4af1-a8bb-92244286ba39-pod-info\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035064 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4lbl\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-kube-api-access-s4lbl\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035084 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-plugins\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035107 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-server-conf\") pod 
\"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035142 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9356406a-3c6e-4af1-a8bb-92244286ba39\" (UID: \"9356406a-3c6e-4af1-a8bb-92244286ba39\") " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035382 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh7f8\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-kube-api-access-lh7f8\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035422 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035474 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035499 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035530 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035546 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035564 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035614 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035650 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035669 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.035718 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.036205 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.039441 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.040070 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.041188 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.041436 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.041769 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.041959 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.042440 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.043476 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.062360 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.062720 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.062924 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9356406a-3c6e-4af1-a8bb-92244286ba39-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.066518 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.066538 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh7f8\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-kube-api-access-lh7f8\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.090788 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.097221 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9356406a-3c6e-4af1-a8bb-92244286ba39-pod-info" (OuterVolumeSpecName: "pod-info") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.099128 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.106588 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a6ccec4-3c1a-4de7-9c41-017ff51001e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.111004 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-kube-api-access-s4lbl" (OuterVolumeSpecName: "kube-api-access-s4lbl") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "kube-api-access-s4lbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.112420 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-config-data" (OuterVolumeSpecName: "config-data") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138156 4752 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9356406a-3c6e-4af1-a8bb-92244286ba39-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138186 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138197 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138207 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138217 4752 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9356406a-3c6e-4af1-a8bb-92244286ba39-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138225 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4lbl\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-kube-api-access-s4lbl\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138234 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138262 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138270 4752 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.138582 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52" path="/var/lib/kubelet/pods/86753ef2-9ef7-4cd4-b4bd-f1a2f74f2a52/volumes" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.160483 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-server-conf" (OuterVolumeSpecName: "server-conf") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.164197 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"7a6ccec4-3c1a-4de7-9c41-017ff51001e7\") " pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.168120 4752 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.185452 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.240621 4752 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9356406a-3c6e-4af1-a8bb-92244286ba39-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.240670 4752 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.335009 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9356406a-3c6e-4af1-a8bb-92244286ba39" (UID: "9356406a-3c6e-4af1-a8bb-92244286ba39"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.342288 4752 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9356406a-3c6e-4af1-a8bb-92244286ba39-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.660553 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.689390 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9356406a-3c6e-4af1-a8bb-92244286ba39","Type":"ContainerDied","Data":"d2eef2287c5b84ff6974cfdc64a02d085dbabf948bb083197efb64c7d54d1538"} Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.689436 4752 scope.go:117] "RemoveContainer" containerID="d0999d11a8ab5b45c1bedd6f8b324decb051608cf2bac2b53cde37e431c0559a" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.689517 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.697201 4752 generic.go:334] "Generic (PLEG): container finished" podID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerID="ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d" exitCode=0 Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.697474 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerDied","Data":"ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d"} Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.703157 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6ccec4-3c1a-4de7-9c41-017ff51001e7","Type":"ContainerStarted","Data":"8a68411fb21ed48114e5fb8e97b10fd054961a475df3f42e667a4bd7b2a81f46"} Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.738290 4752 scope.go:117] "RemoveContainer" containerID="142644851c495509c91062f475336d914e85547dba46c81d6a1978054f164e2a" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.766619 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.779973 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.803066 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:48:23 crc kubenswrapper[4752]: E0122 10:48:23.803621 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="rabbitmq" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.803643 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="rabbitmq" Jan 22 10:48:23 crc kubenswrapper[4752]: E0122 10:48:23.803683 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="setup-container" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.803697 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="setup-container" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.803961 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" containerName="rabbitmq" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.805827 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.808490 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.809536 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.809964 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.810156 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.811271 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.811525 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qwj9f" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.815769 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.827984 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855734 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855787 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnkpc\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-kube-api-access-xnkpc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855847 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855881 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af510fcf-f737-4857-86f6-f1d486c5298a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855902 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855920 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.855949 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.856000 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af510fcf-f737-4857-86f6-f1d486c5298a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.856038 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.856055 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.856080 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.957832 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af510fcf-f737-4857-86f6-f1d486c5298a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.957903 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.957932 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.957977 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958052 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af510fcf-f737-4857-86f6-f1d486c5298a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958109 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958133 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958164 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958212 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958255 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnkpc\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-kube-api-access-xnkpc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958300 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958957 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.959057 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.958986 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.959528 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.959938 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.960550 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af510fcf-f737-4857-86f6-f1d486c5298a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.962948 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af510fcf-f737-4857-86f6-f1d486c5298a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.963535 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.964158 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af510fcf-f737-4857-86f6-f1d486c5298a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.966607 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.976624 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnkpc\" (UniqueName: \"kubernetes.io/projected/af510fcf-f737-4857-86f6-f1d486c5298a-kube-api-access-xnkpc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:23 crc kubenswrapper[4752]: I0122 10:48:23.995762 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"af510fcf-f737-4857-86f6-f1d486c5298a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:24 crc kubenswrapper[4752]: I0122 10:48:24.154769 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:48:24 crc kubenswrapper[4752]: I0122 10:48:24.681907 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 10:48:24 crc kubenswrapper[4752]: I0122 10:48:24.720579 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af510fcf-f737-4857-86f6-f1d486c5298a","Type":"ContainerStarted","Data":"f449b2543d480fc587d0cb7d964e6702c04f97a5d40644e0c87b9c7377388d70"} Jan 22 10:48:25 crc kubenswrapper[4752]: I0122 10:48:25.108847 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9356406a-3c6e-4af1-a8bb-92244286ba39" path="/var/lib/kubelet/pods/9356406a-3c6e-4af1-a8bb-92244286ba39/volumes" Jan 22 10:48:25 crc kubenswrapper[4752]: I0122 10:48:25.736194 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerStarted","Data":"ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5"} Jan 22 10:48:27 crc kubenswrapper[4752]: I0122 10:48:27.612604 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6ccec4-3c1a-4de7-9c41-017ff51001e7","Type":"ContainerStarted","Data":"89c06cc1417a9ff9593613dfd2a678c7a080966c5b89a720764c5c4784b51339"} Jan 22 10:48:28 crc kubenswrapper[4752]: I0122 10:48:28.657215 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af510fcf-f737-4857-86f6-f1d486c5298a","Type":"ContainerStarted","Data":"41d940a7c13d91c743f24e019c214f2782a9f64154c33696964c8cae72b49cc3"} Jan 22 10:48:31 crc kubenswrapper[4752]: I0122 10:48:31.687237 4752 generic.go:334] "Generic (PLEG): container finished" podID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerID="ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5" exitCode=0 Jan 22 10:48:31 crc kubenswrapper[4752]: I0122 10:48:31.687326 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerDied","Data":"ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5"} Jan 22 10:48:31 crc kubenswrapper[4752]: I0122 10:48:31.871645 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd67b459-rntwn"] Jan 22 10:48:31 crc kubenswrapper[4752]: I0122 10:48:31.873664 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:31 crc kubenswrapper[4752]: I0122 10:48:31.881199 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 22 10:48:31 crc kubenswrapper[4752]: I0122 10:48:31.897831 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd67b459-rntwn"] Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.034906 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x57t\" (UniqueName: \"kubernetes.io/projected/35a18247-289a-4df5-b580-9219c3eb828d-kube-api-access-9x57t\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.034962 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.035005 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.035044 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.035063 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-svc\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.035109 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.035131 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-config\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139284 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x57t\" (UniqueName: \"kubernetes.io/projected/35a18247-289a-4df5-b580-9219c3eb828d-kube-api-access-9x57t\") pod \"dnsmasq-dns-5cd67b459-rntwn\" 
(UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139349 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139415 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139480 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139506 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-svc\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139581 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.139612 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-config\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.140823 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.141144 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-config\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.141217 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 
crc kubenswrapper[4752]: I0122 10:48:32.142399 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.143044 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.143982 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-svc\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.161469 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x57t\" (UniqueName: \"kubernetes.io/projected/35a18247-289a-4df5-b580-9219c3eb828d-kube-api-access-9x57t\") pod \"dnsmasq-dns-5cd67b459-rntwn\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.217648 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.728574 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerStarted","Data":"bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac"} Jan 22 10:48:32 crc kubenswrapper[4752]: I0122 10:48:32.766005 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dphbx" podStartSLOduration=3.309740419 podStartE2EDuration="11.764982971s" podCreationTimestamp="2026-01-22 10:48:21 +0000 UTC" firstStartedPulling="2026-01-22 10:48:23.706282183 +0000 UTC m=+1382.936225091" lastFinishedPulling="2026-01-22 10:48:32.161524735 +0000 UTC m=+1391.391467643" observedRunningTime="2026-01-22 10:48:32.749444396 +0000 UTC m=+1391.979387304" watchObservedRunningTime="2026-01-22 10:48:32.764982971 +0000 UTC m=+1391.994925879" Jan 22 10:48:33 crc kubenswrapper[4752]: I0122 10:48:33.239455 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd67b459-rntwn"] Jan 22 10:48:33 crc kubenswrapper[4752]: I0122 10:48:33.739674 4752 generic.go:334] "Generic (PLEG): container finished" podID="35a18247-289a-4df5-b580-9219c3eb828d" containerID="7af52a11d41443853341d0241d6a55262671c0e450e8faa54182fb619132d756" exitCode=0 Jan 22 10:48:33 crc kubenswrapper[4752]: I0122 10:48:33.739810 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" event={"ID":"35a18247-289a-4df5-b580-9219c3eb828d","Type":"ContainerDied","Data":"7af52a11d41443853341d0241d6a55262671c0e450e8faa54182fb619132d756"} Jan 22 10:48:33 crc kubenswrapper[4752]: I0122 10:48:33.739915 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" 
event={"ID":"35a18247-289a-4df5-b580-9219c3eb828d","Type":"ContainerStarted","Data":"d2d33f70ff9fdf6a5e02d39f478c8debb21f0e5ec66ac02ccc1cc5999e5a49e6"} Jan 22 10:48:34 crc kubenswrapper[4752]: I0122 10:48:34.749037 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" event={"ID":"35a18247-289a-4df5-b580-9219c3eb828d","Type":"ContainerStarted","Data":"e0b95e82f11bc76aedace619b7cc5f85555e64693bd1d0e6ec74b2e0e6e7e0d5"} Jan 22 10:48:34 crc kubenswrapper[4752]: I0122 10:48:34.749306 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:34 crc kubenswrapper[4752]: I0122 10:48:34.791795 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" podStartSLOduration=3.791775102 podStartE2EDuration="3.791775102s" podCreationTimestamp="2026-01-22 10:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:48:34.781350101 +0000 UTC m=+1394.011293029" watchObservedRunningTime="2026-01-22 10:48:34.791775102 +0000 UTC m=+1394.021718020" Jan 22 10:48:41 crc kubenswrapper[4752]: I0122 10:48:41.560641 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:41 crc kubenswrapper[4752]: I0122 10:48:41.561324 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:41 crc kubenswrapper[4752]: I0122 10:48:41.618650 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:41 crc kubenswrapper[4752]: I0122 10:48:41.880439 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:41 crc kubenswrapper[4752]: I0122 10:48:41.947244 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dphbx"] Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.219093 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.313008 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfbd48f-4mxp4"] Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.313310 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="dnsmasq-dns" containerID="cri-o://21e442c192ac915b8fdd6f9fc0e7c935fe04f0df6ef7df185c0258c7f8234413" gracePeriod=10 Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.576902 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d7878c9d7-tscvz"] Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.579066 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.597923 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7878c9d7-tscvz"] Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.651882 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-dns-swift-storage-0\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.652082 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-ovsdbserver-sb\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.652192 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-ovsdbserver-nb\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.652274 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrjh\" (UniqueName: \"kubernetes.io/projected/0cb62786-3029-4703-ae34-53fdfd5161d0-kube-api-access-9jrjh\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.652340 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-dns-svc\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.652418 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-config\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.652485 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754506 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrjh\" (UniqueName: \"kubernetes.io/projected/0cb62786-3029-4703-ae34-53fdfd5161d0-kube-api-access-9jrjh\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754556 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-dns-svc\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754595 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-config\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754614 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754724 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-dns-swift-storage-0\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754749 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-ovsdbserver-sb\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.754777 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-ovsdbserver-nb\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.755568 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-ovsdbserver-nb\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.755697 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-dns-svc\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.756169 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-config\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.756674 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-openstack-edpm-ipam\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.757122 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-dns-swift-storage-0\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.757219 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cb62786-3029-4703-ae34-53fdfd5161d0-ovsdbserver-sb\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.783459 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrjh\" (UniqueName: \"kubernetes.io/projected/0cb62786-3029-4703-ae34-53fdfd5161d0-kube-api-access-9jrjh\") pod \"dnsmasq-dns-d7878c9d7-tscvz\" (UID: \"0cb62786-3029-4703-ae34-53fdfd5161d0\") " pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.827903 4752 generic.go:334] "Generic (PLEG): container finished" podID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerID="21e442c192ac915b8fdd6f9fc0e7c935fe04f0df6ef7df185c0258c7f8234413" exitCode=0 Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.827978 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" event={"ID":"dd937dca-75fa-4ec8-b570-7e8d3a749654","Type":"ContainerDied","Data":"21e442c192ac915b8fdd6f9fc0e7c935fe04f0df6ef7df185c0258c7f8234413"} Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.828021 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" event={"ID":"dd937dca-75fa-4ec8-b570-7e8d3a749654","Type":"ContainerDied","Data":"01120b643d29b24c05c2bb3945f44e0ca18f51b89a7d65e069deeb1ad71514ed"} Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.828135 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01120b643d29b24c05c2bb3945f44e0ca18f51b89a7d65e069deeb1ad71514ed" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.849552 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.898719 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.957756 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-sb\") pod \"dd937dca-75fa-4ec8-b570-7e8d3a749654\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.957812 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-nb\") pod \"dd937dca-75fa-4ec8-b570-7e8d3a749654\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.957914 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5ln\" (UniqueName: \"kubernetes.io/projected/dd937dca-75fa-4ec8-b570-7e8d3a749654-kube-api-access-fc5ln\") pod \"dd937dca-75fa-4ec8-b570-7e8d3a749654\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.957998 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-svc\") pod \"dd937dca-75fa-4ec8-b570-7e8d3a749654\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.958092 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-config\") pod \"dd937dca-75fa-4ec8-b570-7e8d3a749654\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.958165 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-swift-storage-0\") pod \"dd937dca-75fa-4ec8-b570-7e8d3a749654\" (UID: \"dd937dca-75fa-4ec8-b570-7e8d3a749654\") " Jan 22 10:48:42 crc kubenswrapper[4752]: I0122 10:48:42.974685 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd937dca-75fa-4ec8-b570-7e8d3a749654-kube-api-access-fc5ln" (OuterVolumeSpecName: "kube-api-access-fc5ln") pod "dd937dca-75fa-4ec8-b570-7e8d3a749654" (UID: "dd937dca-75fa-4ec8-b570-7e8d3a749654"). InnerVolumeSpecName "kube-api-access-fc5ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.021816 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-config" (OuterVolumeSpecName: "config") pod "dd937dca-75fa-4ec8-b570-7e8d3a749654" (UID: "dd937dca-75fa-4ec8-b570-7e8d3a749654"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.042974 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dd937dca-75fa-4ec8-b570-7e8d3a749654" (UID: "dd937dca-75fa-4ec8-b570-7e8d3a749654"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.046188 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd937dca-75fa-4ec8-b570-7e8d3a749654" (UID: "dd937dca-75fa-4ec8-b570-7e8d3a749654"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.053177 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd937dca-75fa-4ec8-b570-7e8d3a749654" (UID: "dd937dca-75fa-4ec8-b570-7e8d3a749654"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.061093 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.061124 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.061136 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.061146 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5ln\" (UniqueName: \"kubernetes.io/projected/dd937dca-75fa-4ec8-b570-7e8d3a749654-kube-api-access-fc5ln\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.061317 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.067490 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd937dca-75fa-4ec8-b570-7e8d3a749654" (UID: "dd937dca-75fa-4ec8-b570-7e8d3a749654"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.165432 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd937dca-75fa-4ec8-b570-7e8d3a749654-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.376937 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7878c9d7-tscvz"] Jan 22 10:48:43 crc kubenswrapper[4752]: W0122 10:48:43.380083 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cb62786_3029_4703_ae34_53fdfd5161d0.slice/crio-ecbbfc7ae483c8d7cc00c82b9e232aa7e5fe2811a00d0eea4021647b13d372ab WatchSource:0}: Error finding container ecbbfc7ae483c8d7cc00c82b9e232aa7e5fe2811a00d0eea4021647b13d372ab: Status 404 returned error can't find the container with id ecbbfc7ae483c8d7cc00c82b9e232aa7e5fe2811a00d0eea4021647b13d372ab Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.836553 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dphbx" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="registry-server" containerID="cri-o://bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac" gracePeriod=2 Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.837055 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" event={"ID":"0cb62786-3029-4703-ae34-53fdfd5161d0","Type":"ContainerStarted","Data":"ecbbfc7ae483c8d7cc00c82b9e232aa7e5fe2811a00d0eea4021647b13d372ab"} Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.837144 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.863640 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfbd48f-4mxp4"] Jan 22 10:48:43 crc kubenswrapper[4752]: I0122 10:48:43.871639 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfbd48f-4mxp4"] Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.112787 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" path="/var/lib/kubelet/pods/dd937dca-75fa-4ec8-b570-7e8d3a749654/volumes" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.476826 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.646461 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-utilities\") pod \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.646851 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-catalog-content\") pod \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.646923 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj76t\" (UniqueName: \"kubernetes.io/projected/c849f4e5-68ce-41d2-9dd3-5766b881f54c-kube-api-access-pj76t\") pod \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\" (UID: \"c849f4e5-68ce-41d2-9dd3-5766b881f54c\") " Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.647404 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-utilities" (OuterVolumeSpecName: "utilities") pod "c849f4e5-68ce-41d2-9dd3-5766b881f54c" (UID: "c849f4e5-68ce-41d2-9dd3-5766b881f54c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.647599 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.653279 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c849f4e5-68ce-41d2-9dd3-5766b881f54c-kube-api-access-pj76t" (OuterVolumeSpecName: "kube-api-access-pj76t") pod "c849f4e5-68ce-41d2-9dd3-5766b881f54c" (UID: "c849f4e5-68ce-41d2-9dd3-5766b881f54c"). InnerVolumeSpecName "kube-api-access-pj76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.750187 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj76t\" (UniqueName: \"kubernetes.io/projected/c849f4e5-68ce-41d2-9dd3-5766b881f54c-kube-api-access-pj76t\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.779183 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c849f4e5-68ce-41d2-9dd3-5766b881f54c" (UID: "c849f4e5-68ce-41d2-9dd3-5766b881f54c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.851668 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c849f4e5-68ce-41d2-9dd3-5766b881f54c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.867132 4752 generic.go:334] "Generic (PLEG): container finished" podID="0cb62786-3029-4703-ae34-53fdfd5161d0" containerID="89e9b141cf1d2ffbb7b4b4b2f8e5c3bfd02aaefbd8c84235a16fd026e7b0abe4" exitCode=0 Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.867206 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" event={"ID":"0cb62786-3029-4703-ae34-53fdfd5161d0","Type":"ContainerDied","Data":"89e9b141cf1d2ffbb7b4b4b2f8e5c3bfd02aaefbd8c84235a16fd026e7b0abe4"} Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.870834 4752 generic.go:334] "Generic (PLEG): container finished" podID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerID="bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac" exitCode=0 Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.870872 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerDied","Data":"bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac"} Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.870889 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dphbx" event={"ID":"c849f4e5-68ce-41d2-9dd3-5766b881f54c","Type":"ContainerDied","Data":"f18c9490e4a7822e8bc19f4578d423a1744c8f9ef4435b058104294543501d8e"} Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.870906 4752 scope.go:117] "RemoveContainer" containerID="bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.871052 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dphbx" Jan 22 10:48:45 crc kubenswrapper[4752]: I0122 10:48:45.900096 4752 scope.go:117] "RemoveContainer" containerID="ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.021752 4752 scope.go:117] "RemoveContainer" containerID="ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.062203 4752 scope.go:117] "RemoveContainer" containerID="bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac" Jan 22 10:48:46 crc kubenswrapper[4752]: E0122 10:48:46.063045 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac\": container with ID starting with bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac not found: ID does not exist" containerID="bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.063083 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac"} err="failed to get container status \"bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac\": rpc error: code = NotFound desc = could not find container \"bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac\": container with ID starting with bdc6ddb7f5939c3901f3fd34b2f72e0f85357de9d0b7e5d10719a431dac505ac not found: ID does not exist" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.063109 4752 scope.go:117] "RemoveContainer" containerID="ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5" Jan 22 10:48:46 crc kubenswrapper[4752]: E0122 10:48:46.063823 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5\": container with ID starting with ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5 not found: ID does not exist" containerID="ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.063850 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5"} err="failed to get container status \"ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5\": rpc error: code = NotFound desc = could not find container \"ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5\": container with ID starting with ed2ea99ea6dfa2f582a86df43d001ed9ed3bef13bd744fc6beb17f95888f09a5 not found: ID does not exist" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.063886 4752 scope.go:117] "RemoveContainer" containerID="ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d" Jan 22 10:48:46 crc kubenswrapper[4752]: E0122 10:48:46.064152 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d\": container with ID starting with ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d not found: ID does not exist" containerID="ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d" 
Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.064172 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d"} err="failed to get container status \"ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d\": rpc error: code = NotFound desc = could not find container \"ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d\": container with ID starting with ea8423903d189185c622ba5c9affd28b940f17acd96b703d5c2fef637cc0778d not found: ID does not exist" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.064533 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dphbx"] Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.075054 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dphbx"] Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.897027 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" event={"ID":"0cb62786-3029-4703-ae34-53fdfd5161d0","Type":"ContainerStarted","Data":"88e6b5ebb758d9ac78b64b075aa0fabfa17fa89dc0117d7185926f042013fcba"} Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.897577 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:46 crc kubenswrapper[4752]: I0122 10:48:46.927207 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" podStartSLOduration=4.927185423 podStartE2EDuration="4.927185423s" podCreationTimestamp="2026-01-22 10:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:48:46.917405289 +0000 UTC m=+1406.147348197" watchObservedRunningTime="2026-01-22 10:48:46.927185423 +0000 UTC m=+1406.157128331" Jan 22 10:48:47 crc kubenswrapper[4752]: I0122 10:48:47.118232 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" path="/var/lib/kubelet/pods/c849f4e5-68ce-41d2-9dd3-5766b881f54c/volumes" Jan 22 10:48:47 crc kubenswrapper[4752]: I0122 10:48:47.738551 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfbd48f-4mxp4" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.223:5353: i/o timeout" Jan 22 10:48:52 crc kubenswrapper[4752]: I0122 10:48:52.900748 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d7878c9d7-tscvz" Jan 22 10:48:52 crc kubenswrapper[4752]: I0122 10:48:52.974084 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd67b459-rntwn"] Jan 22 10:48:52 crc kubenswrapper[4752]: I0122 10:48:52.974348 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="dnsmasq-dns" containerID="cri-o://e0b95e82f11bc76aedace619b7cc5f85555e64693bd1d0e6ec74b2e0e6e7e0d5" gracePeriod=10 Jan 22 10:48:53 crc kubenswrapper[4752]: I0122 10:48:53.985902 4752 generic.go:334] "Generic (PLEG): container finished" podID="35a18247-289a-4df5-b580-9219c3eb828d" containerID="e0b95e82f11bc76aedace619b7cc5f85555e64693bd1d0e6ec74b2e0e6e7e0d5" exitCode=0 Jan 22 10:48:53 crc kubenswrapper[4752]: I0122 
10:48:53.985991 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" event={"ID":"35a18247-289a-4df5-b580-9219c3eb828d","Type":"ContainerDied","Data":"e0b95e82f11bc76aedace619b7cc5f85555e64693bd1d0e6ec74b2e0e6e7e0d5"} Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.218728 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.234:5353: connect: connection refused" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.561253 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.723743 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.723808 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.740909 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x57t\" (UniqueName: \"kubernetes.io/projected/35a18247-289a-4df5-b580-9219c3eb828d-kube-api-access-9x57t\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.742014 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-swift-storage-0\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.742069 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-openstack-edpm-ipam\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.742095 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-nb\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.742233 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-svc\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.742319 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-sb\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.742510 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-config\") pod \"35a18247-289a-4df5-b580-9219c3eb828d\" (UID: \"35a18247-289a-4df5-b580-9219c3eb828d\") " Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.747356 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a18247-289a-4df5-b580-9219c3eb828d-kube-api-access-9x57t" (OuterVolumeSpecName: "kube-api-access-9x57t") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). InnerVolumeSpecName "kube-api-access-9x57t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.799548 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.804962 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.806832 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.808044 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.809922 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-config" (OuterVolumeSpecName: "config") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.820201 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35a18247-289a-4df5-b580-9219c3eb828d" (UID: "35a18247-289a-4df5-b580-9219c3eb828d"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845067 4752 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845097 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845110 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-config\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845120 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x57t\" (UniqueName: \"kubernetes.io/projected/35a18247-289a-4df5-b580-9219c3eb828d-kube-api-access-9x57t\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845131 4752 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845140 4752 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:57 crc kubenswrapper[4752]: I0122 10:48:57.845149 4752 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35a18247-289a-4df5-b580-9219c3eb828d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 10:48:58 crc kubenswrapper[4752]: I0122 10:48:58.032440 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" event={"ID":"35a18247-289a-4df5-b580-9219c3eb828d","Type":"ContainerDied","Data":"d2d33f70ff9fdf6a5e02d39f478c8debb21f0e5ec66ac02ccc1cc5999e5a49e6"} Jan 22 10:48:58 crc kubenswrapper[4752]: I0122 10:48:58.032522 4752 scope.go:117] "RemoveContainer" containerID="e0b95e82f11bc76aedace619b7cc5f85555e64693bd1d0e6ec74b2e0e6e7e0d5" Jan 22 10:48:58 crc kubenswrapper[4752]: I0122 10:48:58.032531 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd67b459-rntwn" Jan 22 10:48:58 crc kubenswrapper[4752]: I0122 10:48:58.058845 4752 scope.go:117] "RemoveContainer" containerID="7af52a11d41443853341d0241d6a55262671c0e450e8faa54182fb619132d756" Jan 22 10:48:58 crc kubenswrapper[4752]: I0122 10:48:58.079784 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd67b459-rntwn"] Jan 22 10:48:58 crc kubenswrapper[4752]: I0122 10:48:58.089980 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd67b459-rntwn"] Jan 22 10:48:59 crc kubenswrapper[4752]: I0122 10:48:59.112165 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a18247-289a-4df5-b580-9219c3eb828d" path="/var/lib/kubelet/pods/35a18247-289a-4df5-b580-9219c3eb828d/volumes" Jan 22 10:49:00 crc kubenswrapper[4752]: I0122 10:49:00.054771 4752 generic.go:334] "Generic (PLEG): container finished" podID="7a6ccec4-3c1a-4de7-9c41-017ff51001e7" containerID="89c06cc1417a9ff9593613dfd2a678c7a080966c5b89a720764c5c4784b51339" exitCode=0 Jan 22 10:49:00 crc kubenswrapper[4752]: I0122 10:49:00.054816 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6ccec4-3c1a-4de7-9c41-017ff51001e7","Type":"ContainerDied","Data":"89c06cc1417a9ff9593613dfd2a678c7a080966c5b89a720764c5c4784b51339"} Jan 22 10:49:01 crc kubenswrapper[4752]: I0122 10:49:01.065917 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a6ccec4-3c1a-4de7-9c41-017ff51001e7","Type":"ContainerStarted","Data":"b2920f446a561e08b09c94e70df323c36b4a5347642ad5ddd6b1e076f682ee3c"} Jan 22 10:49:01 crc kubenswrapper[4752]: I0122 10:49:01.067757 4752 generic.go:334] "Generic (PLEG): container finished" podID="af510fcf-f737-4857-86f6-f1d486c5298a" containerID="41d940a7c13d91c743f24e019c214f2782a9f64154c33696964c8cae72b49cc3" exitCode=0 Jan 22 10:49:01 crc kubenswrapper[4752]: I0122 10:49:01.067804 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af510fcf-f737-4857-86f6-f1d486c5298a","Type":"ContainerDied","Data":"41d940a7c13d91c743f24e019c214f2782a9f64154c33696964c8cae72b49cc3"} Jan 22 10:49:01 crc kubenswrapper[4752]: I0122 10:49:01.068024 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 10:49:01 crc kubenswrapper[4752]: I0122 10:49:01.122216 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.122199312 podStartE2EDuration="39.122199312s" podCreationTimestamp="2026-01-22 10:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:49:01.09870079 +0000 UTC m=+1420.328643708" watchObservedRunningTime="2026-01-22 10:49:01.122199312 +0000 UTC m=+1420.352142220" Jan 22 10:49:02 crc kubenswrapper[4752]: I0122 10:49:02.078347 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"af510fcf-f737-4857-86f6-f1d486c5298a","Type":"ContainerStarted","Data":"ccc6077dbbc71044bd233ce3ef5f60720fb15c8b5301de41bb107050d6529988"} Jan 22 10:49:02 crc kubenswrapper[4752]: I0122 10:49:02.078831 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:49:02 crc kubenswrapper[4752]: I0122 10:49:02.112817 4752 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.112798716 podStartE2EDuration="39.112798716s" podCreationTimestamp="2026-01-22 10:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 10:49:02.103065832 +0000 UTC m=+1421.333008750" watchObservedRunningTime="2026-01-22 10:49:02.112798716 +0000 UTC m=+1421.342741624" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.223813 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt"] Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224757 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="init" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224771 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="init" Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224785 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="dnsmasq-dns" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224791 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="dnsmasq-dns" Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224801 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="extract-content" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224807 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="extract-content" Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224815 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="dnsmasq-dns" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224820 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="dnsmasq-dns" Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224830 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="init" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224835 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="init" Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224843 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="registry-server" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224849 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="registry-server" Jan 22 10:49:11 crc kubenswrapper[4752]: E0122 10:49:11.224941 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="extract-utilities" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.224948 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="extract-utilities" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.225125 4752 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c849f4e5-68ce-41d2-9dd3-5766b881f54c" containerName="registry-server" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.225149 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd937dca-75fa-4ec8-b570-7e8d3a749654" containerName="dnsmasq-dns" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.225162 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a18247-289a-4df5-b580-9219c3eb828d" containerName="dnsmasq-dns" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.227135 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.229692 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.229701 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.230143 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.230497 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.245873 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt"] Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.353766 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbgr\" (UniqueName: \"kubernetes.io/projected/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-kube-api-access-fgbgr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.353938 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.353975 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.354034 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.456086 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgbgr\" (UniqueName: \"kubernetes.io/projected/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-kube-api-access-fgbgr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.456222 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.456257 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.456312 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.462571 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.463059 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.463955 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 10:49:11.479843 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgbgr\" (UniqueName: \"kubernetes.io/projected/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-kube-api-access-fgbgr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:11 crc kubenswrapper[4752]: I0122 
10:49:11.561412 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:12 crc kubenswrapper[4752]: I0122 10:49:12.164915 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt"] Jan 22 10:49:13 crc kubenswrapper[4752]: I0122 10:49:13.188432 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7a6ccec4-3c1a-4de7-9c41-017ff51001e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.232:5671: connect: connection refused" Jan 22 10:49:13 crc kubenswrapper[4752]: I0122 10:49:13.217427 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" event={"ID":"ece83eca-f252-4c41-8f79-7a0dfd13ffb1","Type":"ContainerStarted","Data":"08aae7c1e1fab8bdbdbd8cedd7658c5af2026cb876e1c5eaa65f0c8c778f5828"} Jan 22 10:49:14 crc kubenswrapper[4752]: I0122 10:49:14.158091 4752 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="af510fcf-f737-4857-86f6-f1d486c5298a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.233:5671: connect: connection refused" Jan 22 10:49:23 crc kubenswrapper[4752]: I0122 10:49:23.189122 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 10:49:23 crc kubenswrapper[4752]: I0122 10:49:23.801572 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:49:24 crc kubenswrapper[4752]: I0122 10:49:24.156719 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 10:49:24 crc kubenswrapper[4752]: I0122 10:49:24.382764 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" event={"ID":"ece83eca-f252-4c41-8f79-7a0dfd13ffb1","Type":"ContainerStarted","Data":"f481377982f19ba95321e947258cd632bf1c4865d6a3d7283575c9e2215ded36"} Jan 22 10:49:24 crc kubenswrapper[4752]: I0122 10:49:24.401198 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" podStartSLOduration=1.781022062 podStartE2EDuration="13.401164235s" podCreationTimestamp="2026-01-22 10:49:11 +0000 UTC" firstStartedPulling="2026-01-22 10:49:12.178582051 +0000 UTC m=+1431.408524959" lastFinishedPulling="2026-01-22 10:49:23.798724214 +0000 UTC m=+1443.028667132" observedRunningTime="2026-01-22 10:49:24.397945511 +0000 UTC m=+1443.627888429" watchObservedRunningTime="2026-01-22 10:49:24.401164235 +0000 UTC m=+1443.631107143" Jan 22 10:49:27 crc kubenswrapper[4752]: I0122 10:49:27.723482 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:49:27 crc kubenswrapper[4752]: I0122 10:49:27.723619 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 22 10:49:36 crc kubenswrapper[4752]: I0122 10:49:36.521314 4752 generic.go:334] "Generic (PLEG): container finished" podID="ece83eca-f252-4c41-8f79-7a0dfd13ffb1" containerID="f481377982f19ba95321e947258cd632bf1c4865d6a3d7283575c9e2215ded36" exitCode=0 Jan 22 10:49:36 crc kubenswrapper[4752]: I0122 10:49:36.521429 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" event={"ID":"ece83eca-f252-4c41-8f79-7a0dfd13ffb1","Type":"ContainerDied","Data":"f481377982f19ba95321e947258cd632bf1c4865d6a3d7283575c9e2215ded36"} Jan 22 10:49:37 crc kubenswrapper[4752]: I0122 10:49:37.988785 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.132293 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgbgr\" (UniqueName: \"kubernetes.io/projected/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-kube-api-access-fgbgr\") pod \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.132495 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-repo-setup-combined-ca-bundle\") pod \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.132689 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-ssh-key-openstack-edpm-ipam\") pod \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.132744 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-inventory\") pod \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\" (UID: \"ece83eca-f252-4c41-8f79-7a0dfd13ffb1\") " Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.138675 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ece83eca-f252-4c41-8f79-7a0dfd13ffb1" (UID: "ece83eca-f252-4c41-8f79-7a0dfd13ffb1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.143183 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-kube-api-access-fgbgr" (OuterVolumeSpecName: "kube-api-access-fgbgr") pod "ece83eca-f252-4c41-8f79-7a0dfd13ffb1" (UID: "ece83eca-f252-4c41-8f79-7a0dfd13ffb1"). InnerVolumeSpecName "kube-api-access-fgbgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.164504 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ece83eca-f252-4c41-8f79-7a0dfd13ffb1" (UID: "ece83eca-f252-4c41-8f79-7a0dfd13ffb1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.164558 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-inventory" (OuterVolumeSpecName: "inventory") pod "ece83eca-f252-4c41-8f79-7a0dfd13ffb1" (UID: "ece83eca-f252-4c41-8f79-7a0dfd13ffb1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.235426 4752 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.235465 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.235477 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.235488 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgbgr\" (UniqueName: \"kubernetes.io/projected/ece83eca-f252-4c41-8f79-7a0dfd13ffb1-kube-api-access-fgbgr\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.541673 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" event={"ID":"ece83eca-f252-4c41-8f79-7a0dfd13ffb1","Type":"ContainerDied","Data":"08aae7c1e1fab8bdbdbd8cedd7658c5af2026cb876e1c5eaa65f0c8c778f5828"} Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.541712 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08aae7c1e1fab8bdbdbd8cedd7658c5af2026cb876e1c5eaa65f0c8c778f5828" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.541765 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-n2txt" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.627819 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh"] Jan 22 10:49:38 crc kubenswrapper[4752]: E0122 10:49:38.628342 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece83eca-f252-4c41-8f79-7a0dfd13ffb1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.628365 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece83eca-f252-4c41-8f79-7a0dfd13ffb1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.628598 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece83eca-f252-4c41-8f79-7a0dfd13ffb1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.629230 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.646923 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh"] Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.670182 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.670182 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.670332 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.670355 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.749066 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn7nq\" (UniqueName: \"kubernetes.io/projected/03b01f65-5385-41fb-8781-220ede3d7818-kube-api-access-pn7nq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.749441 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.749731 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.851823 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.852358 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn7nq\" (UniqueName: \"kubernetes.io/projected/03b01f65-5385-41fb-8781-220ede3d7818-kube-api-access-pn7nq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.852415 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.856698 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.857087 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.875416 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn7nq\" (UniqueName: \"kubernetes.io/projected/03b01f65-5385-41fb-8781-220ede3d7818-kube-api-access-pn7nq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-k8frh\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:38 crc kubenswrapper[4752]: I0122 10:49:38.983455 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:39 crc kubenswrapper[4752]: I0122 10:49:39.541627 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh"] Jan 22 10:49:39 crc kubenswrapper[4752]: I0122 10:49:39.553103 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" event={"ID":"03b01f65-5385-41fb-8781-220ede3d7818","Type":"ContainerStarted","Data":"8adc5f46de084fb7d9a3fda189dbaa8148d510ca816e419944f90495afecd4b4"} Jan 22 10:49:40 crc kubenswrapper[4752]: I0122 10:49:40.562797 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" event={"ID":"03b01f65-5385-41fb-8781-220ede3d7818","Type":"ContainerStarted","Data":"72c32d19c2095a57e2055f01f154f0054e44840becffd8a23bebc8410a1f4731"} Jan 22 10:49:40 crc kubenswrapper[4752]: I0122 10:49:40.592554 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" podStartSLOduration=2.168809406 podStartE2EDuration="2.592537148s" podCreationTimestamp="2026-01-22 10:49:38 +0000 UTC" firstStartedPulling="2026-01-22 10:49:39.534951238 +0000 UTC m=+1458.764894146" lastFinishedPulling="2026-01-22 10:49:39.95867894 +0000 UTC m=+1459.188621888" observedRunningTime="2026-01-22 10:49:40.580492304 +0000 UTC m=+1459.810435222" watchObservedRunningTime="2026-01-22 10:49:40.592537148 +0000 UTC m=+1459.822480056" Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.336129 4752 scope.go:117] "RemoveContainer" containerID="1cdce13328688e01b7341da8ed77283903647fcf2bdf11e0cd49eedafbe92ecf" Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.365669 4752 scope.go:117] "RemoveContainer" containerID="2ddac350eb56090abadddec2596487408847b69da6506066f2aff9cd7068cc57" Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.408587 4752 scope.go:117] "RemoveContainer" containerID="995568dee6a7fcd3329dc9f8b98154d1c2411479a918e6f78544486382e2212a" Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.449746 4752 scope.go:117] "RemoveContainer" containerID="7ce3f8a252ee36861a8bb577959e97074508170c52921da8f95ba06cc33b8cf8" Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.548330 4752 scope.go:117] "RemoveContainer" containerID="ddc4ea8cf03a8ec374f0dcc5200d9e2a316ea1abc0dec9358dd822a960e1017b" Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.595202 4752 generic.go:334] "Generic (PLEG): container finished" podID="03b01f65-5385-41fb-8781-220ede3d7818" containerID="72c32d19c2095a57e2055f01f154f0054e44840becffd8a23bebc8410a1f4731" exitCode=0 Jan 22 10:49:43 crc kubenswrapper[4752]: I0122 10:49:43.595273 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" event={"ID":"03b01f65-5385-41fb-8781-220ede3d7818","Type":"ContainerDied","Data":"72c32d19c2095a57e2055f01f154f0054e44840becffd8a23bebc8410a1f4731"} Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.142371 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.186370 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn7nq\" (UniqueName: \"kubernetes.io/projected/03b01f65-5385-41fb-8781-220ede3d7818-kube-api-access-pn7nq\") pod \"03b01f65-5385-41fb-8781-220ede3d7818\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.186715 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-inventory\") pod \"03b01f65-5385-41fb-8781-220ede3d7818\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.186846 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-ssh-key-openstack-edpm-ipam\") pod \"03b01f65-5385-41fb-8781-220ede3d7818\" (UID: \"03b01f65-5385-41fb-8781-220ede3d7818\") " Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.196264 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b01f65-5385-41fb-8781-220ede3d7818-kube-api-access-pn7nq" (OuterVolumeSpecName: "kube-api-access-pn7nq") pod "03b01f65-5385-41fb-8781-220ede3d7818" (UID: "03b01f65-5385-41fb-8781-220ede3d7818"). InnerVolumeSpecName "kube-api-access-pn7nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.227832 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03b01f65-5385-41fb-8781-220ede3d7818" (UID: "03b01f65-5385-41fb-8781-220ede3d7818"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.228765 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-inventory" (OuterVolumeSpecName: "inventory") pod "03b01f65-5385-41fb-8781-220ede3d7818" (UID: "03b01f65-5385-41fb-8781-220ede3d7818"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.296434 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn7nq\" (UniqueName: \"kubernetes.io/projected/03b01f65-5385-41fb-8781-220ede3d7818-kube-api-access-pn7nq\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.296483 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.296498 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03b01f65-5385-41fb-8781-220ede3d7818-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.630880 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" event={"ID":"03b01f65-5385-41fb-8781-220ede3d7818","Type":"ContainerDied","Data":"8adc5f46de084fb7d9a3fda189dbaa8148d510ca816e419944f90495afecd4b4"} Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.630924 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8adc5f46de084fb7d9a3fda189dbaa8148d510ca816e419944f90495afecd4b4" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.630978 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-k8frh" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.746087 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t"] Jan 22 10:49:45 crc kubenswrapper[4752]: E0122 10:49:45.747742 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b01f65-5385-41fb-8781-220ede3d7818" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.747776 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b01f65-5385-41fb-8781-220ede3d7818" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.748023 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b01f65-5385-41fb-8781-220ede3d7818" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.749285 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.751763 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.752271 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.752317 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.752605 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.755808 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t"] Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.815652 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.815730 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.815801 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.815830 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgrrh\" (UniqueName: \"kubernetes.io/projected/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-kube-api-access-sgrrh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.918002 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.918153 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.918195 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgrrh\" (UniqueName: \"kubernetes.io/projected/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-kube-api-access-sgrrh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.918308 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.923328 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.926314 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.929611 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:45 crc kubenswrapper[4752]: I0122 10:49:45.936352 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgrrh\" (UniqueName: \"kubernetes.io/projected/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-kube-api-access-sgrrh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:46 crc kubenswrapper[4752]: I0122 10:49:46.081548 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:49:46 crc kubenswrapper[4752]: I0122 10:49:46.622885 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t"] Jan 22 10:49:46 crc kubenswrapper[4752]: I0122 10:49:46.647368 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" event={"ID":"479a4500-cbcd-46d3-ba8c-5bc07a4bc459","Type":"ContainerStarted","Data":"4a64ebec44e045c1a8fa812f8beb87d6e440fa0e673c24f84c0e73fcd6ce7458"} Jan 22 10:49:47 crc kubenswrapper[4752]: I0122 10:49:47.658708 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" event={"ID":"479a4500-cbcd-46d3-ba8c-5bc07a4bc459","Type":"ContainerStarted","Data":"61effb0cf5087c7a0535f1267e347b1397bc41401c1d6b11e61587a2bcba94b8"} Jan 22 10:49:47 crc kubenswrapper[4752]: I0122 10:49:47.680431 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" podStartSLOduration=1.980366082 podStartE2EDuration="2.680412766s" podCreationTimestamp="2026-01-22 10:49:45 +0000 UTC" firstStartedPulling="2026-01-22 10:49:46.628605435 +0000 UTC m=+1465.858548343" lastFinishedPulling="2026-01-22 10:49:47.328652119 +0000 UTC m=+1466.558595027" observedRunningTime="2026-01-22 10:49:47.676622807 +0000 UTC m=+1466.906565715" watchObservedRunningTime="2026-01-22 10:49:47.680412766 +0000 UTC m=+1466.910355674" Jan 22 10:49:57 crc kubenswrapper[4752]: I0122 10:49:57.724156 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:49:57 crc kubenswrapper[4752]: I0122 10:49:57.725038 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:49:57 crc kubenswrapper[4752]: I0122 10:49:57.725100 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:49:57 crc kubenswrapper[4752]: I0122 10:49:57.725872 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f52f6d18c41bfe8f36bddff272721d0bfb8924dfca328885232f7fbfd3ac21f"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:49:57 crc kubenswrapper[4752]: I0122 10:49:57.725936 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://6f52f6d18c41bfe8f36bddff272721d0bfb8924dfca328885232f7fbfd3ac21f" gracePeriod=600 Jan 22 10:49:58 crc kubenswrapper[4752]: I0122 10:49:58.771227 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" 
containerID="6f52f6d18c41bfe8f36bddff272721d0bfb8924dfca328885232f7fbfd3ac21f" exitCode=0 Jan 22 10:49:58 crc kubenswrapper[4752]: I0122 10:49:58.771302 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"6f52f6d18c41bfe8f36bddff272721d0bfb8924dfca328885232f7fbfd3ac21f"} Jan 22 10:49:58 crc kubenswrapper[4752]: I0122 10:49:58.772805 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877"} Jan 22 10:49:58 crc kubenswrapper[4752]: I0122 10:49:58.772832 4752 scope.go:117] "RemoveContainer" containerID="98fa078dac5ca30a46bf92bf45d8fc8b321a6f93f3d0f79aa40474301ba963e0" Jan 22 10:50:43 crc kubenswrapper[4752]: I0122 10:50:43.726466 4752 scope.go:117] "RemoveContainer" containerID="b613bfe657cd8197b9b8f3a2c193072e9763c90ae4811d451e1e726868704b5d" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.683435 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nzhg"] Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.687704 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.705084 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nzhg"] Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.819821 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqfx\" (UniqueName: \"kubernetes.io/projected/ef911df0-773d-48fe-87c5-d36dd47b7d11-kube-api-access-tpqfx\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.820108 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-utilities\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.820135 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-catalog-content\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.921340 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-catalog-content\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.921505 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqfx\" (UniqueName: 
\"kubernetes.io/projected/ef911df0-773d-48fe-87c5-d36dd47b7d11-kube-api-access-tpqfx\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.921555 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-utilities\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.921783 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-catalog-content\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.922177 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-utilities\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:21 crc kubenswrapper[4752]: I0122 10:51:21.940604 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqfx\" (UniqueName: \"kubernetes.io/projected/ef911df0-773d-48fe-87c5-d36dd47b7d11-kube-api-access-tpqfx\") pod \"community-operators-5nzhg\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:22 crc kubenswrapper[4752]: I0122 10:51:22.004578 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:22 crc kubenswrapper[4752]: I0122 10:51:22.510183 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nzhg"] Jan 22 10:51:22 crc kubenswrapper[4752]: I0122 10:51:22.665166 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerStarted","Data":"93686affe6f85d0e4df1dc5b51027452a2706a3e10f2c861548422779f5e535d"} Jan 22 10:51:23 crc kubenswrapper[4752]: I0122 10:51:23.678004 4752 generic.go:334] "Generic (PLEG): container finished" podID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerID="13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3" exitCode=0 Jan 22 10:51:23 crc kubenswrapper[4752]: I0122 10:51:23.678081 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerDied","Data":"13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3"} Jan 22 10:51:23 crc kubenswrapper[4752]: I0122 10:51:23.680159 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:51:25 crc kubenswrapper[4752]: I0122 10:51:25.767549 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerStarted","Data":"64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867"} Jan 22 10:51:26 crc kubenswrapper[4752]: I0122 10:51:26.784437 4752 generic.go:334] "Generic (PLEG): container finished" podID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerID="64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867" exitCode=0 Jan 22 10:51:26 crc kubenswrapper[4752]: I0122 10:51:26.784563 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerDied","Data":"64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867"} Jan 22 10:51:27 crc kubenswrapper[4752]: I0122 10:51:27.798290 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerStarted","Data":"33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8"} Jan 22 10:51:27 crc kubenswrapper[4752]: I0122 10:51:27.827797 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nzhg" podStartSLOduration=3.330128612 podStartE2EDuration="6.827775553s" podCreationTimestamp="2026-01-22 10:51:21 +0000 UTC" firstStartedPulling="2026-01-22 10:51:23.679950007 +0000 UTC m=+1562.909892915" lastFinishedPulling="2026-01-22 10:51:27.177596948 +0000 UTC m=+1566.407539856" observedRunningTime="2026-01-22 10:51:27.817934249 +0000 UTC m=+1567.047877157" watchObservedRunningTime="2026-01-22 10:51:27.827775553 +0000 UTC m=+1567.057718481" Jan 22 10:51:32 crc kubenswrapper[4752]: I0122 10:51:32.004697 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:32 crc kubenswrapper[4752]: I0122 10:51:32.005207 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:32 crc kubenswrapper[4752]: I0122 10:51:32.059287 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:32 crc kubenswrapper[4752]: I0122 10:51:32.883298 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:32 crc kubenswrapper[4752]: I0122 10:51:32.936419 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nzhg"] Jan 22 10:51:34 crc kubenswrapper[4752]: I0122 10:51:34.861064 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nzhg" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="registry-server" containerID="cri-o://33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8" gracePeriod=2 Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.819255 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.885252 4752 generic.go:334] "Generic (PLEG): container finished" podID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerID="33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8" exitCode=0 Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.885300 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerDied","Data":"33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8"} Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.885332 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nzhg" event={"ID":"ef911df0-773d-48fe-87c5-d36dd47b7d11","Type":"ContainerDied","Data":"93686affe6f85d0e4df1dc5b51027452a2706a3e10f2c861548422779f5e535d"} Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.885354 4752 scope.go:117] "RemoveContainer" containerID="33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.885508 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nzhg" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.911550 4752 scope.go:117] "RemoveContainer" containerID="64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.937944 4752 scope.go:117] "RemoveContainer" containerID="13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.983789 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-catalog-content\") pod \"ef911df0-773d-48fe-87c5-d36dd47b7d11\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.983911 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpqfx\" (UniqueName: \"kubernetes.io/projected/ef911df0-773d-48fe-87c5-d36dd47b7d11-kube-api-access-tpqfx\") pod \"ef911df0-773d-48fe-87c5-d36dd47b7d11\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.983969 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-utilities\") pod \"ef911df0-773d-48fe-87c5-d36dd47b7d11\" (UID: \"ef911df0-773d-48fe-87c5-d36dd47b7d11\") " Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.984725 4752 scope.go:117] "RemoveContainer" containerID="33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.984942 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-utilities" (OuterVolumeSpecName: "utilities") pod "ef911df0-773d-48fe-87c5-d36dd47b7d11" (UID: "ef911df0-773d-48fe-87c5-d36dd47b7d11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:51:35 crc kubenswrapper[4752]: E0122 10:51:35.985113 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8\": container with ID starting with 33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8 not found: ID does not exist" containerID="33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.985147 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8"} err="failed to get container status \"33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8\": rpc error: code = NotFound desc = could not find container \"33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8\": container with ID starting with 33c7adc91bec0948020b754c0b2c6ed7e1ffc399a3af85e7af77b3a22ebedce8 not found: ID does not exist" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.985175 4752 scope.go:117] "RemoveContainer" containerID="64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867" Jan 22 10:51:35 crc kubenswrapper[4752]: E0122 10:51:35.985625 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867\": container with ID starting with 64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867 not found: ID does not exist" containerID="64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.985657 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867"} err="failed to get container status \"64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867\": rpc error: code = NotFound desc = could not find container \"64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867\": container with ID starting with 64d6b14043cb9510e0aa7dbbb215cdba4bf0b0c371137fd7acc4e681fbc93867 not found: ID does not exist" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.985682 4752 scope.go:117] "RemoveContainer" containerID="13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3" Jan 22 10:51:35 crc kubenswrapper[4752]: E0122 10:51:35.985993 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3\": container with ID starting with 13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3 not found: ID does not exist" containerID="13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.986023 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3"} err="failed to get container status \"13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3\": rpc error: code = NotFound desc = could not find container \"13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3\": container with ID starting with 
13644b338c78bd36387569828b39fae0ff669444cc759242b78c3c895a2fc0e3 not found: ID does not exist" Jan 22 10:51:35 crc kubenswrapper[4752]: I0122 10:51:35.989483 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef911df0-773d-48fe-87c5-d36dd47b7d11-kube-api-access-tpqfx" (OuterVolumeSpecName: "kube-api-access-tpqfx") pod "ef911df0-773d-48fe-87c5-d36dd47b7d11" (UID: "ef911df0-773d-48fe-87c5-d36dd47b7d11"). InnerVolumeSpecName "kube-api-access-tpqfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:51:36 crc kubenswrapper[4752]: I0122 10:51:36.039427 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef911df0-773d-48fe-87c5-d36dd47b7d11" (UID: "ef911df0-773d-48fe-87c5-d36dd47b7d11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:51:36 crc kubenswrapper[4752]: I0122 10:51:36.088151 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:51:36 crc kubenswrapper[4752]: I0122 10:51:36.088194 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpqfx\" (UniqueName: \"kubernetes.io/projected/ef911df0-773d-48fe-87c5-d36dd47b7d11-kube-api-access-tpqfx\") on node \"crc\" DevicePath \"\"" Jan 22 10:51:36 crc kubenswrapper[4752]: I0122 10:51:36.088208 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef911df0-773d-48fe-87c5-d36dd47b7d11-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:51:36 crc kubenswrapper[4752]: I0122 10:51:36.225183 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nzhg"] Jan 22 10:51:36 crc kubenswrapper[4752]: I0122 10:51:36.238686 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nzhg"] Jan 22 10:51:37 crc kubenswrapper[4752]: I0122 10:51:37.134898 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" path="/var/lib/kubelet/pods/ef911df0-773d-48fe-87c5-d36dd47b7d11/volumes" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.686522 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4qlg"] Jan 22 10:51:52 crc kubenswrapper[4752]: E0122 10:51:52.687423 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="registry-server" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.687436 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="registry-server" Jan 22 10:51:52 crc kubenswrapper[4752]: E0122 10:51:52.687459 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="extract-utilities" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.687466 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="extract-utilities" Jan 22 10:51:52 crc kubenswrapper[4752]: E0122 10:51:52.687495 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" 
containerName="extract-content" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.687501 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="extract-content" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.687670 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef911df0-773d-48fe-87c5-d36dd47b7d11" containerName="registry-server" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.689107 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.713544 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4qlg"] Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.816983 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwjt\" (UniqueName: \"kubernetes.io/projected/bc8cc603-de45-4542-afe7-4565f1c0acbe-kube-api-access-bqwjt\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.817090 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-catalog-content\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.817152 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-utilities\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.918314 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-utilities\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.918420 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwjt\" (UniqueName: \"kubernetes.io/projected/bc8cc603-de45-4542-afe7-4565f1c0acbe-kube-api-access-bqwjt\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.918491 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-catalog-content\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.918961 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-catalog-content\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " 
pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.919099 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-utilities\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:52 crc kubenswrapper[4752]: I0122 10:51:52.937293 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwjt\" (UniqueName: \"kubernetes.io/projected/bc8cc603-de45-4542-afe7-4565f1c0acbe-kube-api-access-bqwjt\") pod \"redhat-marketplace-p4qlg\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:53 crc kubenswrapper[4752]: I0122 10:51:53.046213 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:51:53 crc kubenswrapper[4752]: I0122 10:51:53.537979 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4qlg"] Jan 22 10:51:54 crc kubenswrapper[4752]: I0122 10:51:54.060083 4752 generic.go:334] "Generic (PLEG): container finished" podID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerID="58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea" exitCode=0 Jan 22 10:51:54 crc kubenswrapper[4752]: I0122 10:51:54.060184 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4qlg" event={"ID":"bc8cc603-de45-4542-afe7-4565f1c0acbe","Type":"ContainerDied","Data":"58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea"} Jan 22 10:51:54 crc kubenswrapper[4752]: I0122 10:51:54.060392 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4qlg" event={"ID":"bc8cc603-de45-4542-afe7-4565f1c0acbe","Type":"ContainerStarted","Data":"984c993f13822afd75862b5d0210abae10f5a23664eb4dd8668b812018508355"} Jan 22 10:51:55 crc kubenswrapper[4752]: I0122 10:51:55.073197 4752 generic.go:334] "Generic (PLEG): container finished" podID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerID="57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5" exitCode=0 Jan 22 10:51:55 crc kubenswrapper[4752]: I0122 10:51:55.073307 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4qlg" event={"ID":"bc8cc603-de45-4542-afe7-4565f1c0acbe","Type":"ContainerDied","Data":"57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5"} Jan 22 10:51:56 crc kubenswrapper[4752]: I0122 10:51:56.088550 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4qlg" event={"ID":"bc8cc603-de45-4542-afe7-4565f1c0acbe","Type":"ContainerStarted","Data":"d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a"} Jan 22 10:51:56 crc kubenswrapper[4752]: I0122 10:51:56.118229 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4qlg" podStartSLOduration=2.6832239380000003 podStartE2EDuration="4.118208676s" podCreationTimestamp="2026-01-22 10:51:52 +0000 UTC" firstStartedPulling="2026-01-22 10:51:54.061940209 +0000 UTC m=+1593.291883117" lastFinishedPulling="2026-01-22 10:51:55.496924917 +0000 UTC m=+1594.726867855" observedRunningTime="2026-01-22 10:51:56.116129413 +0000 UTC m=+1595.346072331" 
watchObservedRunningTime="2026-01-22 10:51:56.118208676 +0000 UTC m=+1595.348151584" Jan 22 10:52:03 crc kubenswrapper[4752]: I0122 10:52:03.047117 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:52:03 crc kubenswrapper[4752]: I0122 10:52:03.047927 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:52:03 crc kubenswrapper[4752]: I0122 10:52:03.161590 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:52:03 crc kubenswrapper[4752]: I0122 10:52:03.252386 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:52:03 crc kubenswrapper[4752]: I0122 10:52:03.414090 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4qlg"] Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.177604 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4qlg" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="registry-server" containerID="cri-o://d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a" gracePeriod=2 Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.595887 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.687652 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-catalog-content\") pod \"bc8cc603-de45-4542-afe7-4565f1c0acbe\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.687943 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwjt\" (UniqueName: \"kubernetes.io/projected/bc8cc603-de45-4542-afe7-4565f1c0acbe-kube-api-access-bqwjt\") pod \"bc8cc603-de45-4542-afe7-4565f1c0acbe\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.687976 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-utilities\") pod \"bc8cc603-de45-4542-afe7-4565f1c0acbe\" (UID: \"bc8cc603-de45-4542-afe7-4565f1c0acbe\") " Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.689784 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-utilities" (OuterVolumeSpecName: "utilities") pod "bc8cc603-de45-4542-afe7-4565f1c0acbe" (UID: "bc8cc603-de45-4542-afe7-4565f1c0acbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.690795 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.699286 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8cc603-de45-4542-afe7-4565f1c0acbe-kube-api-access-bqwjt" (OuterVolumeSpecName: "kube-api-access-bqwjt") pod "bc8cc603-de45-4542-afe7-4565f1c0acbe" (UID: "bc8cc603-de45-4542-afe7-4565f1c0acbe"). InnerVolumeSpecName "kube-api-access-bqwjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.722951 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc8cc603-de45-4542-afe7-4565f1c0acbe" (UID: "bc8cc603-de45-4542-afe7-4565f1c0acbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.792226 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc8cc603-de45-4542-afe7-4565f1c0acbe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:05 crc kubenswrapper[4752]: I0122 10:52:05.792262 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqwjt\" (UniqueName: \"kubernetes.io/projected/bc8cc603-de45-4542-afe7-4565f1c0acbe-kube-api-access-bqwjt\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.189744 4752 generic.go:334] "Generic (PLEG): container finished" podID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerID="d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a" exitCode=0 Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.189790 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4qlg" event={"ID":"bc8cc603-de45-4542-afe7-4565f1c0acbe","Type":"ContainerDied","Data":"d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a"} Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.189819 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4qlg" event={"ID":"bc8cc603-de45-4542-afe7-4565f1c0acbe","Type":"ContainerDied","Data":"984c993f13822afd75862b5d0210abae10f5a23664eb4dd8668b812018508355"} Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.189838 4752 scope.go:117] "RemoveContainer" containerID="d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.189877 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4qlg" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.232687 4752 scope.go:117] "RemoveContainer" containerID="57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.257041 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4qlg"] Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.266713 4752 scope.go:117] "RemoveContainer" containerID="58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.269591 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4qlg"] Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.304432 4752 scope.go:117] "RemoveContainer" containerID="d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a" Jan 22 10:52:06 crc kubenswrapper[4752]: E0122 10:52:06.305967 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a\": container with ID starting with d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a not found: ID does not exist" containerID="d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.306017 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a"} err="failed to get container status \"d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a\": rpc error: code = NotFound desc = could not find container \"d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a\": container with ID starting with d0bd4ef376c9ad4adb8b2ed473b814026cda5d9f70aad6311f52e55e7065d53a not found: ID does not exist" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.306046 4752 scope.go:117] "RemoveContainer" containerID="57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5" Jan 22 10:52:06 crc kubenswrapper[4752]: E0122 10:52:06.306665 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5\": container with ID starting with 57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5 not found: ID does not exist" containerID="57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.306695 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5"} err="failed to get container status \"57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5\": rpc error: code = NotFound desc = could not find container \"57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5\": container with ID starting with 57e9b1a6808b682fad825b7323c9791852227bfd3547edf3a36a8417979befe5 not found: ID does not exist" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.306710 4752 scope.go:117] "RemoveContainer" containerID="58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea" Jan 22 10:52:06 crc kubenswrapper[4752]: E0122 10:52:06.307369 4752 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea\": container with ID starting with 58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea not found: ID does not exist" containerID="58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea" Jan 22 10:52:06 crc kubenswrapper[4752]: I0122 10:52:06.307401 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea"} err="failed to get container status \"58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea\": rpc error: code = NotFound desc = could not find container \"58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea\": container with ID starting with 58f0e520756fd8d0f471420deb9f40d3fce6a391f0c2f03f576da9652e84d2ea not found: ID does not exist" Jan 22 10:52:07 crc kubenswrapper[4752]: I0122 10:52:07.114109 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" path="/var/lib/kubelet/pods/bc8cc603-de45-4542-afe7-4565f1c0acbe/volumes" Jan 22 10:52:27 crc kubenswrapper[4752]: I0122 10:52:27.723899 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:52:27 crc kubenswrapper[4752]: I0122 10:52:27.724502 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.890309 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dftxm"] Jan 22 10:52:36 crc kubenswrapper[4752]: E0122 10:52:36.892371 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="extract-content" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.892386 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="extract-content" Jan 22 10:52:36 crc kubenswrapper[4752]: E0122 10:52:36.892405 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="registry-server" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.892429 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="registry-server" Jan 22 10:52:36 crc kubenswrapper[4752]: E0122 10:52:36.892450 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="extract-utilities" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.892457 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="extract-utilities" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.892755 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8cc603-de45-4542-afe7-4565f1c0acbe" containerName="registry-server" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 
10:52:36.894919 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.924409 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dftxm"] Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.993009 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wppjv\" (UniqueName: \"kubernetes.io/projected/16347cd1-0e60-44f2-a87e-7db4e65d21d9-kube-api-access-wppjv\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.993154 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-catalog-content\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:36 crc kubenswrapper[4752]: I0122 10:52:36.993394 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-utilities\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.095483 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wppjv\" (UniqueName: \"kubernetes.io/projected/16347cd1-0e60-44f2-a87e-7db4e65d21d9-kube-api-access-wppjv\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.095636 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-catalog-content\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.095699 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-utilities\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.096451 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-utilities\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.096481 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-catalog-content\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc 
kubenswrapper[4752]: I0122 10:52:37.116292 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wppjv\" (UniqueName: \"kubernetes.io/projected/16347cd1-0e60-44f2-a87e-7db4e65d21d9-kube-api-access-wppjv\") pod \"certified-operators-dftxm\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.229688 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:37 crc kubenswrapper[4752]: I0122 10:52:37.788511 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dftxm"] Jan 22 10:52:38 crc kubenswrapper[4752]: I0122 10:52:38.576423 4752 generic.go:334] "Generic (PLEG): container finished" podID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerID="1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59" exitCode=0 Jan 22 10:52:38 crc kubenswrapper[4752]: I0122 10:52:38.576545 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dftxm" event={"ID":"16347cd1-0e60-44f2-a87e-7db4e65d21d9","Type":"ContainerDied","Data":"1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59"} Jan 22 10:52:38 crc kubenswrapper[4752]: I0122 10:52:38.576748 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dftxm" event={"ID":"16347cd1-0e60-44f2-a87e-7db4e65d21d9","Type":"ContainerStarted","Data":"09dcd1a3ce1f9cb6b7f802722dd260a44f84faad725f3618608be100adb9a2a9"} Jan 22 10:52:40 crc kubenswrapper[4752]: I0122 10:52:40.597677 4752 generic.go:334] "Generic (PLEG): container finished" podID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerID="ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2" exitCode=0 Jan 22 10:52:40 crc kubenswrapper[4752]: I0122 10:52:40.597787 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dftxm" event={"ID":"16347cd1-0e60-44f2-a87e-7db4e65d21d9","Type":"ContainerDied","Data":"ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2"} Jan 22 10:52:41 crc kubenswrapper[4752]: I0122 10:52:41.624625 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dftxm" event={"ID":"16347cd1-0e60-44f2-a87e-7db4e65d21d9","Type":"ContainerStarted","Data":"f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5"} Jan 22 10:52:41 crc kubenswrapper[4752]: I0122 10:52:41.651915 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dftxm" podStartSLOduration=3.126004183 podStartE2EDuration="5.651838586s" podCreationTimestamp="2026-01-22 10:52:36 +0000 UTC" firstStartedPulling="2026-01-22 10:52:38.593080665 +0000 UTC m=+1637.823023613" lastFinishedPulling="2026-01-22 10:52:41.118915108 +0000 UTC m=+1640.348858016" observedRunningTime="2026-01-22 10:52:41.640885801 +0000 UTC m=+1640.870828729" watchObservedRunningTime="2026-01-22 10:52:41.651838586 +0000 UTC m=+1640.881781494" Jan 22 10:52:43 crc kubenswrapper[4752]: I0122 10:52:43.853906 4752 scope.go:117] "RemoveContainer" containerID="fd3e24a10a6bfb838078527b5b1dd225ae1a9a397f8ccef372850997ee864fa7" Jan 22 10:52:43 crc kubenswrapper[4752]: I0122 10:52:43.881579 4752 scope.go:117] "RemoveContainer" containerID="6ba9d12dd4e7fff64cc1425154dacc1044aa90a5323cf91059223a0050677f08" Jan 22 
10:52:43 crc kubenswrapper[4752]: I0122 10:52:43.906600 4752 scope.go:117] "RemoveContainer" containerID="59f4abc7c47799056a58e9e7248cf318cb65c55960ada283aa95448f3961194e" Jan 22 10:52:43 crc kubenswrapper[4752]: I0122 10:52:43.934572 4752 scope.go:117] "RemoveContainer" containerID="17bceccaefb6658ffd9ef20d6a216e6c5384511375aa2689962331fd36a4e9a5" Jan 22 10:52:47 crc kubenswrapper[4752]: I0122 10:52:47.230530 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:47 crc kubenswrapper[4752]: I0122 10:52:47.231277 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:47 crc kubenswrapper[4752]: I0122 10:52:47.283606 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:47 crc kubenswrapper[4752]: I0122 10:52:47.750721 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:47 crc kubenswrapper[4752]: I0122 10:52:47.890020 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dftxm"] Jan 22 10:52:49 crc kubenswrapper[4752]: I0122 10:52:49.717625 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dftxm" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="registry-server" containerID="cri-o://f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5" gracePeriod=2 Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.244866 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.438438 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-catalog-content\") pod \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.438577 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wppjv\" (UniqueName: \"kubernetes.io/projected/16347cd1-0e60-44f2-a87e-7db4e65d21d9-kube-api-access-wppjv\") pod \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.438662 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-utilities\") pod \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\" (UID: \"16347cd1-0e60-44f2-a87e-7db4e65d21d9\") " Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.439672 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-utilities" (OuterVolumeSpecName: "utilities") pod "16347cd1-0e60-44f2-a87e-7db4e65d21d9" (UID: "16347cd1-0e60-44f2-a87e-7db4e65d21d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.446356 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16347cd1-0e60-44f2-a87e-7db4e65d21d9-kube-api-access-wppjv" (OuterVolumeSpecName: "kube-api-access-wppjv") pod "16347cd1-0e60-44f2-a87e-7db4e65d21d9" (UID: "16347cd1-0e60-44f2-a87e-7db4e65d21d9"). InnerVolumeSpecName "kube-api-access-wppjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.494612 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16347cd1-0e60-44f2-a87e-7db4e65d21d9" (UID: "16347cd1-0e60-44f2-a87e-7db4e65d21d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.540981 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.541015 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wppjv\" (UniqueName: \"kubernetes.io/projected/16347cd1-0e60-44f2-a87e-7db4e65d21d9-kube-api-access-wppjv\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.541029 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16347cd1-0e60-44f2-a87e-7db4e65d21d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.733593 4752 generic.go:334] "Generic (PLEG): container finished" podID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerID="f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5" exitCode=0 Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.733639 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dftxm" event={"ID":"16347cd1-0e60-44f2-a87e-7db4e65d21d9","Type":"ContainerDied","Data":"f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5"} Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.733668 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dftxm" event={"ID":"16347cd1-0e60-44f2-a87e-7db4e65d21d9","Type":"ContainerDied","Data":"09dcd1a3ce1f9cb6b7f802722dd260a44f84faad725f3618608be100adb9a2a9"} Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.733685 4752 scope.go:117] "RemoveContainer" containerID="f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.733697 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dftxm" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.764619 4752 scope.go:117] "RemoveContainer" containerID="ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.772072 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dftxm"] Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.781128 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dftxm"] Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.812834 4752 scope.go:117] "RemoveContainer" containerID="1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.835487 4752 scope.go:117] "RemoveContainer" containerID="f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5" Jan 22 10:52:50 crc kubenswrapper[4752]: E0122 10:52:50.835944 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5\": container with ID starting with f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5 not found: ID does not exist" containerID="f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.835985 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5"} err="failed to get container status \"f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5\": rpc error: code = NotFound desc = could not find container \"f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5\": container with ID starting with f9425868b7a03962309cdcc34fffea1e323a9d758cc0c380fbc9922062a67de5 not found: ID does not exist" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.836010 4752 scope.go:117] "RemoveContainer" containerID="ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2" Jan 22 10:52:50 crc kubenswrapper[4752]: E0122 10:52:50.837373 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2\": container with ID starting with ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2 not found: ID does not exist" containerID="ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.837429 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2"} err="failed to get container status \"ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2\": rpc error: code = NotFound desc = could not find container \"ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2\": container with ID starting with ce67e36acf2b3c851a67f265fd42c3c1c574290c48f96bc3d32513158a486df2 not found: ID does not exist" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.837461 4752 scope.go:117] "RemoveContainer" containerID="1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59" Jan 22 10:52:50 crc kubenswrapper[4752]: E0122 10:52:50.837949 4752 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59\": container with ID starting with 1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59 not found: ID does not exist" containerID="1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59" Jan 22 10:52:50 crc kubenswrapper[4752]: I0122 10:52:50.838025 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59"} err="failed to get container status \"1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59\": rpc error: code = NotFound desc = could not find container \"1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59\": container with ID starting with 1e5d9e8dee4da224325a4f501736846e5eb585b90fb34e06a5368d4f3533eb59 not found: ID does not exist" Jan 22 10:52:51 crc kubenswrapper[4752]: I0122 10:52:51.111104 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" path="/var/lib/kubelet/pods/16347cd1-0e60-44f2-a87e-7db4e65d21d9/volumes" Jan 22 10:52:57 crc kubenswrapper[4752]: I0122 10:52:57.060145 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rw8mj"] Jan 22 10:52:57 crc kubenswrapper[4752]: I0122 10:52:57.071442 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rw8mj"] Jan 22 10:52:57 crc kubenswrapper[4752]: I0122 10:52:57.110824 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c78780d-8ad0-45e5-86f1-c3cb7beccf0e" path="/var/lib/kubelet/pods/7c78780d-8ad0-45e5-86f1-c3cb7beccf0e/volumes" Jan 22 10:52:57 crc kubenswrapper[4752]: I0122 10:52:57.723972 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:52:57 crc kubenswrapper[4752]: I0122 10:52:57.724380 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:52:58 crc kubenswrapper[4752]: I0122 10:52:58.046553 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-67e9-account-create-update-8cjp4"] Jan 22 10:52:58 crc kubenswrapper[4752]: I0122 10:52:58.059482 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-0920-account-create-update-jpx4w"] Jan 22 10:52:58 crc kubenswrapper[4752]: I0122 10:52:58.071975 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-0920-account-create-update-jpx4w"] Jan 22 10:52:58 crc kubenswrapper[4752]: I0122 10:52:58.083136 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-67e9-account-create-update-8cjp4"] Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.036085 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-hjjs9"] Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.049310 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74ba-account-create-update-8hmpj"] 
Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.065746 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-hjjs9"] Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.075773 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-74ba-account-create-update-8hmpj"] Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.084178 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-rr8lq"] Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.092234 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-rr8lq"] Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.108271 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4" path="/var/lib/kubelet/pods/3a7e3fbf-6502-4604-bce5-72c2b5f5c1e4/volumes" Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.108840 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff0ab31-0a6b-45a4-8a4d-484c75853276" path="/var/lib/kubelet/pods/5ff0ab31-0a6b-45a4-8a4d-484c75853276/volumes" Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.109356 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2dfe4d-751e-4896-9712-035e127f29ca" path="/var/lib/kubelet/pods/6b2dfe4d-751e-4896-9712-035e127f29ca/volumes" Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.110238 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6de9060-b65e-43cc-b492-e19b84135efb" path="/var/lib/kubelet/pods/b6de9060-b65e-43cc-b492-e19b84135efb/volumes" Jan 22 10:52:59 crc kubenswrapper[4752]: I0122 10:52:59.111203 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd285c5e-d4b4-4402-97b9-aff12576faef" path="/var/lib/kubelet/pods/bd285c5e-d4b4-4402-97b9-aff12576faef/volumes" Jan 22 10:53:23 crc kubenswrapper[4752]: I0122 10:53:23.101394 4752 generic.go:334] "Generic (PLEG): container finished" podID="479a4500-cbcd-46d3-ba8c-5bc07a4bc459" containerID="61effb0cf5087c7a0535f1267e347b1397bc41401c1d6b11e61587a2bcba94b8" exitCode=0 Jan 22 10:53:23 crc kubenswrapper[4752]: I0122 10:53:23.128890 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" event={"ID":"479a4500-cbcd-46d3-ba8c-5bc07a4bc459","Type":"ContainerDied","Data":"61effb0cf5087c7a0535f1267e347b1397bc41401c1d6b11e61587a2bcba94b8"} Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.530413 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.661131 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgrrh\" (UniqueName: \"kubernetes.io/projected/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-kube-api-access-sgrrh\") pod \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.661268 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-inventory\") pod \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.661538 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-bootstrap-combined-ca-bundle\") pod \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.661671 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-ssh-key-openstack-edpm-ipam\") pod \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\" (UID: \"479a4500-cbcd-46d3-ba8c-5bc07a4bc459\") " Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.670674 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-kube-api-access-sgrrh" (OuterVolumeSpecName: "kube-api-access-sgrrh") pod "479a4500-cbcd-46d3-ba8c-5bc07a4bc459" (UID: "479a4500-cbcd-46d3-ba8c-5bc07a4bc459"). InnerVolumeSpecName "kube-api-access-sgrrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.676007 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "479a4500-cbcd-46d3-ba8c-5bc07a4bc459" (UID: "479a4500-cbcd-46d3-ba8c-5bc07a4bc459"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.714088 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "479a4500-cbcd-46d3-ba8c-5bc07a4bc459" (UID: "479a4500-cbcd-46d3-ba8c-5bc07a4bc459"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.727428 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-inventory" (OuterVolumeSpecName: "inventory") pod "479a4500-cbcd-46d3-ba8c-5bc07a4bc459" (UID: "479a4500-cbcd-46d3-ba8c-5bc07a4bc459"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.764810 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgrrh\" (UniqueName: \"kubernetes.io/projected/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-kube-api-access-sgrrh\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.765761 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.765794 4752 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:24 crc kubenswrapper[4752]: I0122 10:53:24.765811 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/479a4500-cbcd-46d3-ba8c-5bc07a4bc459-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.129391 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" event={"ID":"479a4500-cbcd-46d3-ba8c-5bc07a4bc459","Type":"ContainerDied","Data":"4a64ebec44e045c1a8fa812f8beb87d6e440fa0e673c24f84c0e73fcd6ce7458"} Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.129451 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a64ebec44e045c1a8fa812f8beb87d6e440fa0e673c24f84c0e73fcd6ce7458" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.129520 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-clx9t" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298192 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg"] Jan 22 10:53:25 crc kubenswrapper[4752]: E0122 10:53:25.298670 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="extract-utilities" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298688 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="extract-utilities" Jan 22 10:53:25 crc kubenswrapper[4752]: E0122 10:53:25.298702 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="extract-content" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298709 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="extract-content" Jan 22 10:53:25 crc kubenswrapper[4752]: E0122 10:53:25.298728 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="registry-server" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298735 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="registry-server" Jan 22 10:53:25 crc kubenswrapper[4752]: E0122 10:53:25.298751 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479a4500-cbcd-46d3-ba8c-5bc07a4bc459" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298757 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="479a4500-cbcd-46d3-ba8c-5bc07a4bc459" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298969 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="16347cd1-0e60-44f2-a87e-7db4e65d21d9" containerName="registry-server" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.298981 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="479a4500-cbcd-46d3-ba8c-5bc07a4bc459" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.299671 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.304147 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.304571 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.304614 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.304617 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.313481 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg"] Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.379564 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.379636 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.379658 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5bm\" (UniqueName: \"kubernetes.io/projected/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-kube-api-access-4w5bm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.481955 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.482027 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.482051 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5bm\" (UniqueName: 
\"kubernetes.io/projected/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-kube-api-access-4w5bm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.488780 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.489048 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.501428 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5bm\" (UniqueName: \"kubernetes.io/projected/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-kube-api-access-4w5bm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:25 crc kubenswrapper[4752]: I0122 10:53:25.666967 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:53:26 crc kubenswrapper[4752]: I0122 10:53:26.238207 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg"] Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.154056 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" event={"ID":"25811352-0ef9-4bfa-aa58-5ef018aaf9c5","Type":"ContainerStarted","Data":"41c19cf58f9cefa9d41984ec70ac46902743f4ef53c2ec19d756f62af4d2c6e3"} Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.154409 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" event={"ID":"25811352-0ef9-4bfa-aa58-5ef018aaf9c5","Type":"ContainerStarted","Data":"29f36e3ef8cc1707a57ad19f52a6f597ea7608f20aaf3e1a64ac088e1040fc29"} Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.182591 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" podStartSLOduration=1.754696295 podStartE2EDuration="2.182570616s" podCreationTimestamp="2026-01-22 10:53:25 +0000 UTC" firstStartedPulling="2026-01-22 10:53:26.220864473 +0000 UTC m=+1685.450807381" lastFinishedPulling="2026-01-22 10:53:26.648738764 +0000 UTC m=+1685.878681702" observedRunningTime="2026-01-22 10:53:27.177438572 +0000 UTC m=+1686.407381490" watchObservedRunningTime="2026-01-22 10:53:27.182570616 +0000 UTC m=+1686.412513534" Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.724225 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.724317 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.724401 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.725555 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 10:53:27 crc kubenswrapper[4752]: I0122 10:53:27.725661 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" gracePeriod=600 Jan 22 10:53:27 crc kubenswrapper[4752]: E0122 10:53:27.856712 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:53:28 crc kubenswrapper[4752]: I0122 10:53:28.169548 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" exitCode=0 Jan 22 10:53:28 crc kubenswrapper[4752]: I0122 10:53:28.169628 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877"} Jan 22 10:53:28 crc kubenswrapper[4752]: I0122 10:53:28.169709 4752 scope.go:117] "RemoveContainer" containerID="6f52f6d18c41bfe8f36bddff272721d0bfb8924dfca328885232f7fbfd3ac21f" Jan 22 10:53:28 crc kubenswrapper[4752]: I0122 10:53:28.170793 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:53:28 crc kubenswrapper[4752]: E0122 10:53:28.171381 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" 
Jan 22 10:53:29 crc kubenswrapper[4752]: I0122 10:53:29.057068 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b87d-account-create-update-xj66b"] Jan 22 10:53:29 crc kubenswrapper[4752]: I0122 10:53:29.071629 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b87d-account-create-update-xj66b"] Jan 22 10:53:29 crc kubenswrapper[4752]: I0122 10:53:29.089189 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f88d-account-create-update-8pd96"] Jan 22 10:53:29 crc kubenswrapper[4752]: I0122 10:53:29.113337 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3d4ddd-7439-4fd2-bc21-d57caa0910a1" path="/var/lib/kubelet/pods/dc3d4ddd-7439-4fd2-bc21-d57caa0910a1/volumes" Jan 22 10:53:29 crc kubenswrapper[4752]: I0122 10:53:29.114013 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f88d-account-create-update-8pd96"] Jan 22 10:53:31 crc kubenswrapper[4752]: I0122 10:53:31.107399 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82c4b09-addc-4fb4-97ab-aa791a082372" path="/var/lib/kubelet/pods/d82c4b09-addc-4fb4-97ab-aa791a082372/volumes" Jan 22 10:53:35 crc kubenswrapper[4752]: I0122 10:53:35.030363 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pc4tl"] Jan 22 10:53:35 crc kubenswrapper[4752]: I0122 10:53:35.040021 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-sq5kv"] Jan 22 10:53:35 crc kubenswrapper[4752]: I0122 10:53:35.048034 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pc4tl"] Jan 22 10:53:35 crc kubenswrapper[4752]: I0122 10:53:35.055638 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sq5kv"] Jan 22 10:53:35 crc kubenswrapper[4752]: I0122 10:53:35.109731 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a68d4f1-5659-4fec-bcf4-c2d1276d56d4" path="/var/lib/kubelet/pods/0a68d4f1-5659-4fec-bcf4-c2d1276d56d4/volumes" Jan 22 10:53:35 crc kubenswrapper[4752]: I0122 10:53:35.110569 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304e6123-7015-427f-a6bd-d950c0e6c7d3" path="/var/lib/kubelet/pods/304e6123-7015-427f-a6bd-d950c0e6c7d3/volumes" Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.035123 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rndfm"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.046195 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-s2g9g"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.055941 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-88fb-account-create-update-lg4pl"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.065284 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-80e3-account-create-update-59d9n"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.076905 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-s2g9g"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.085662 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-80e3-account-create-update-59d9n"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.093985 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rndfm"] Jan 22 10:53:40 crc 
kubenswrapper[4752]: I0122 10:53:40.102100 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-88fb-account-create-update-lg4pl"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.111816 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rnpv4"] Jan 22 10:53:40 crc kubenswrapper[4752]: I0122 10:53:40.119928 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rnpv4"] Jan 22 10:53:41 crc kubenswrapper[4752]: I0122 10:53:41.112526 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:53:41 crc kubenswrapper[4752]: E0122 10:53:41.113437 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:53:41 crc kubenswrapper[4752]: I0122 10:53:41.115939 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a79a45-be65-4f75-be4a-75f5f1f87ce3" path="/var/lib/kubelet/pods/33a79a45-be65-4f75-be4a-75f5f1f87ce3/volumes" Jan 22 10:53:41 crc kubenswrapper[4752]: I0122 10:53:41.117541 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6604eed0-a1d2-4ac2-9dba-66e4228899ec" path="/var/lib/kubelet/pods/6604eed0-a1d2-4ac2-9dba-66e4228899ec/volumes" Jan 22 10:53:41 crc kubenswrapper[4752]: I0122 10:53:41.118800 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1709564-73da-4021-8b4a-865eb06625c0" path="/var/lib/kubelet/pods/b1709564-73da-4021-8b4a-865eb06625c0/volumes" Jan 22 10:53:41 crc kubenswrapper[4752]: I0122 10:53:41.120059 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0a11ce-1fe6-4f39-b8f5-4fb45730b889" path="/var/lib/kubelet/pods/dd0a11ce-1fe6-4f39-b8f5-4fb45730b889/volumes" Jan 22 10:53:41 crc kubenswrapper[4752]: I0122 10:53:41.122121 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2637ca-135b-4963-849a-d95c79b04aea" path="/var/lib/kubelet/pods/ed2637ca-135b-4963-849a-d95c79b04aea/volumes" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.027014 4752 scope.go:117] "RemoveContainer" containerID="c2d058308616b86ef40c87e03dbc40ff7ac121a980b90cbf24d2dafd01f35c68" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.053833 4752 scope.go:117] "RemoveContainer" containerID="f2ba9671f2daab8912b37e1b7f6806600dc899e7b316d9d26c2ed45560608890" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.123528 4752 scope.go:117] "RemoveContainer" containerID="98cb387208a42842e71f0bd2e2ae4e6be58ffec213c8a77534f9060a8acf3ac3" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.186684 4752 scope.go:117] "RemoveContainer" containerID="f03afed889aeec8cd1d80f13f06c4bbfa0d476c8e562f941b7d58c71e4988cc5" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.238433 4752 scope.go:117] "RemoveContainer" containerID="475df6400bc4e609255324901aca9a5c548fa920c1cd506dc7facd2a04735674" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.289085 4752 scope.go:117] "RemoveContainer" containerID="21e442c192ac915b8fdd6f9fc0e7c935fe04f0df6ef7df185c0258c7f8234413" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.341959 4752 scope.go:117] 
"RemoveContainer" containerID="a242997d949a0d12d9d6fb06607f89f71814f3a33cddbf2abaaf61569593b35d" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.376658 4752 scope.go:117] "RemoveContainer" containerID="034ed466fb3bcda9c3e1093ea0a58907cfbb749cd2e6d4ef6db45994d3f716b9" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.407179 4752 scope.go:117] "RemoveContainer" containerID="1768f4d270d89933ac6a508aeec4bc0a6dfb3b8bd6387ee54580d20315f08e4a" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.437567 4752 scope.go:117] "RemoveContainer" containerID="9626e8d48ea65bde987be02a31ed0dc20e0d191c0008f9d2e9ef114f50098b81" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.465243 4752 scope.go:117] "RemoveContainer" containerID="0d16ddbe20da4ac7dacb16a2105fe52a16fd8a4c6b4cccf80ee8e8296869b651" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.491428 4752 scope.go:117] "RemoveContainer" containerID="7f9d3fb55d1958f9f516555df3bd2b0ad40bd5f1a2288d1604ceeff2ef318ab9" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.517845 4752 scope.go:117] "RemoveContainer" containerID="994f82c7b47f667f80b18e8f580ac5009f1c30c5b348ea2df229dc5a8dce1df9" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.542954 4752 scope.go:117] "RemoveContainer" containerID="fe676518b26d54eb615a490cdc2c63b87adee6aaebf8ff041e50ba35cd94704d" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.589696 4752 scope.go:117] "RemoveContainer" containerID="b2526f80b0c3e824fb3dc632719617c537bb92006fb57553cec2626de389085e" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.619226 4752 scope.go:117] "RemoveContainer" containerID="948629349e0af91adf24411b75edfe432ea00c1c5c0cc022156ce2ea48364faf" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.643883 4752 scope.go:117] "RemoveContainer" containerID="1146b32f7ff484fd1ac7d7c5756595aa04e7293e5c331a8570f55ce4b012c520" Jan 22 10:53:44 crc kubenswrapper[4752]: I0122 10:53:44.687633 4752 scope.go:117] "RemoveContainer" containerID="7398562f3e4c81de74516e17d55f3b22e723dece0718cc7797fe38d521cb865c" Jan 22 10:53:53 crc kubenswrapper[4752]: I0122 10:53:53.098146 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:53:53 crc kubenswrapper[4752]: E0122 10:53:53.098917 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:53:59 crc kubenswrapper[4752]: I0122 10:53:59.048872 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-j4drz"] Jan 22 10:53:59 crc kubenswrapper[4752]: I0122 10:53:59.063794 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-vbsj2"] Jan 22 10:53:59 crc kubenswrapper[4752]: I0122 10:53:59.073564 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-vbsj2"] Jan 22 10:53:59 crc kubenswrapper[4752]: I0122 10:53:59.111439 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1192a1d-8861-4ce2-bfee-1360fecff6e7" path="/var/lib/kubelet/pods/a1192a1d-8861-4ce2-bfee-1360fecff6e7/volumes" Jan 22 10:53:59 crc kubenswrapper[4752]: I0122 10:53:59.112905 4752 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/keystone-db-sync-j4drz"] Jan 22 10:54:01 crc kubenswrapper[4752]: I0122 10:54:01.111440 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446a849f-df12-4b01-8457-dd5c828dd567" path="/var/lib/kubelet/pods/446a849f-df12-4b01-8457-dd5c828dd567/volumes" Jan 22 10:54:07 crc kubenswrapper[4752]: I0122 10:54:07.098975 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:54:07 crc kubenswrapper[4752]: E0122 10:54:07.100253 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:54:19 crc kubenswrapper[4752]: I0122 10:54:19.098497 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:54:19 crc kubenswrapper[4752]: E0122 10:54:19.099196 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:54:34 crc kubenswrapper[4752]: I0122 10:54:34.097924 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:54:34 crc kubenswrapper[4752]: E0122 10:54:34.098697 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:54:36 crc kubenswrapper[4752]: I0122 10:54:36.069075 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2nc8f"] Jan 22 10:54:36 crc kubenswrapper[4752]: I0122 10:54:36.081561 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2nc8f"] Jan 22 10:54:37 crc kubenswrapper[4752]: I0122 10:54:37.116704 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849caaec-8756-46a5-b544-8e914c0b022b" path="/var/lib/kubelet/pods/849caaec-8756-46a5-b544-8e914c0b022b/volumes" Jan 22 10:54:44 crc kubenswrapper[4752]: I0122 10:54:44.982491 4752 scope.go:117] "RemoveContainer" containerID="c22e2c024ebd331afd1ed05d3365b2ce8a025da7044071fe519e1bfb6951d935" Jan 22 10:54:45 crc kubenswrapper[4752]: I0122 10:54:45.038641 4752 scope.go:117] "RemoveContainer" containerID="c6856fed9b6819ff1583a4aa95c4cf5e495e35246e5f24e5cdb013b1fb81f02b" Jan 22 10:54:45 crc kubenswrapper[4752]: I0122 10:54:45.101868 4752 scope.go:117] "RemoveContainer" containerID="012473e05bb03dc9ef2dc19a2b157e7bf99d6ec67810ff680fcec83a0370879b" Jan 22 10:54:46 crc kubenswrapper[4752]: I0122 10:54:46.097747 4752 scope.go:117] "RemoveContainer" 
containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:54:46 crc kubenswrapper[4752]: E0122 10:54:46.098074 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:54:51 crc kubenswrapper[4752]: I0122 10:54:51.064753 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hx4lb"] Jan 22 10:54:51 crc kubenswrapper[4752]: I0122 10:54:51.077524 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hx4lb"] Jan 22 10:54:51 crc kubenswrapper[4752]: I0122 10:54:51.117639 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b" path="/var/lib/kubelet/pods/46dd30e3-47fd-4bd6-934e-cd7ac88e4f5b/volumes" Jan 22 10:54:53 crc kubenswrapper[4752]: I0122 10:54:53.044315 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qht9v"] Jan 22 10:54:53 crc kubenswrapper[4752]: I0122 10:54:53.055675 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qht9v"] Jan 22 10:54:53 crc kubenswrapper[4752]: I0122 10:54:53.068023 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fwcgf"] Jan 22 10:54:53 crc kubenswrapper[4752]: I0122 10:54:53.077625 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fwcgf"] Jan 22 10:54:53 crc kubenswrapper[4752]: I0122 10:54:53.114297 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751b5593-10c9-46a0-bb4d-141ecbc13e10" path="/var/lib/kubelet/pods/751b5593-10c9-46a0-bb4d-141ecbc13e10/volumes" Jan 22 10:54:53 crc kubenswrapper[4752]: I0122 10:54:53.115352 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b5da9f-e5d4-4629-89a9-1d215475d3bb" path="/var/lib/kubelet/pods/d6b5da9f-e5d4-4629-89a9-1d215475d3bb/volumes" Jan 22 10:54:54 crc kubenswrapper[4752]: I0122 10:54:54.033513 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bznl9"] Jan 22 10:54:54 crc kubenswrapper[4752]: I0122 10:54:54.040785 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bznl9"] Jan 22 10:54:55 crc kubenswrapper[4752]: I0122 10:54:55.112335 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ef3108-d8e0-424d-be70-8bcab25d2c0b" path="/var/lib/kubelet/pods/a8ef3108-d8e0-424d-be70-8bcab25d2c0b/volumes" Jan 22 10:55:01 crc kubenswrapper[4752]: I0122 10:55:01.108442 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:55:01 crc kubenswrapper[4752]: E0122 10:55:01.109630 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:55:10 crc kubenswrapper[4752]: 
I0122 10:55:10.057538 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6vrvk"] Jan 22 10:55:10 crc kubenswrapper[4752]: I0122 10:55:10.073848 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6vrvk"] Jan 22 10:55:11 crc kubenswrapper[4752]: I0122 10:55:11.121962 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48968c0-ac21-49af-9161-19bf5e37c9eb" path="/var/lib/kubelet/pods/e48968c0-ac21-49af-9161-19bf5e37c9eb/volumes" Jan 22 10:55:15 crc kubenswrapper[4752]: I0122 10:55:15.098259 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:55:15 crc kubenswrapper[4752]: E0122 10:55:15.099448 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:55:17 crc kubenswrapper[4752]: I0122 10:55:17.431693 4752 generic.go:334] "Generic (PLEG): container finished" podID="25811352-0ef9-4bfa-aa58-5ef018aaf9c5" containerID="41c19cf58f9cefa9d41984ec70ac46902743f4ef53c2ec19d756f62af4d2c6e3" exitCode=0 Jan 22 10:55:17 crc kubenswrapper[4752]: I0122 10:55:17.431837 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" event={"ID":"25811352-0ef9-4bfa-aa58-5ef018aaf9c5","Type":"ContainerDied","Data":"41c19cf58f9cefa9d41984ec70ac46902743f4ef53c2ec19d756f62af4d2c6e3"} Jan 22 10:55:18 crc kubenswrapper[4752]: I0122 10:55:18.914697 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.048577 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-inventory\") pod \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.048974 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w5bm\" (UniqueName: \"kubernetes.io/projected/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-kube-api-access-4w5bm\") pod \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.049016 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-ssh-key-openstack-edpm-ipam\") pod \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\" (UID: \"25811352-0ef9-4bfa-aa58-5ef018aaf9c5\") " Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.064256 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-kube-api-access-4w5bm" (OuterVolumeSpecName: "kube-api-access-4w5bm") pod "25811352-0ef9-4bfa-aa58-5ef018aaf9c5" (UID: "25811352-0ef9-4bfa-aa58-5ef018aaf9c5"). InnerVolumeSpecName "kube-api-access-4w5bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.094073 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-inventory" (OuterVolumeSpecName: "inventory") pod "25811352-0ef9-4bfa-aa58-5ef018aaf9c5" (UID: "25811352-0ef9-4bfa-aa58-5ef018aaf9c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.099197 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25811352-0ef9-4bfa-aa58-5ef018aaf9c5" (UID: "25811352-0ef9-4bfa-aa58-5ef018aaf9c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.153838 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.154073 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w5bm\" (UniqueName: \"kubernetes.io/projected/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-kube-api-access-4w5bm\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.154179 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25811352-0ef9-4bfa-aa58-5ef018aaf9c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.459037 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" event={"ID":"25811352-0ef9-4bfa-aa58-5ef018aaf9c5","Type":"ContainerDied","Data":"29f36e3ef8cc1707a57ad19f52a6f597ea7608f20aaf3e1a64ac088e1040fc29"} Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.459099 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f36e3ef8cc1707a57ad19f52a6f597ea7608f20aaf3e1a64ac088e1040fc29" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.459177 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pkcpg" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.589064 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6"] Jan 22 10:55:19 crc kubenswrapper[4752]: E0122 10:55:19.589476 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25811352-0ef9-4bfa-aa58-5ef018aaf9c5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.589492 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="25811352-0ef9-4bfa-aa58-5ef018aaf9c5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.589779 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="25811352-0ef9-4bfa-aa58-5ef018aaf9c5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.590632 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.595194 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.595211 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.595391 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.600500 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6"] Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.601405 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.769553 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.770028 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tww\" (UniqueName: \"kubernetes.io/projected/97a3d49a-de49-480c-8cae-0f000a7d1b8b-kube-api-access-48tww\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.770479 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.872240 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.872322 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.872398 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48tww\" (UniqueName: \"kubernetes.io/projected/97a3d49a-de49-480c-8cae-0f000a7d1b8b-kube-api-access-48tww\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.879353 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.879961 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.890783 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48tww\" (UniqueName: \"kubernetes.io/projected/97a3d49a-de49-480c-8cae-0f000a7d1b8b-kube-api-access-48tww\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:19 crc kubenswrapper[4752]: I0122 10:55:19.942181 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:55:20 crc kubenswrapper[4752]: I0122 10:55:20.529508 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6"] Jan 22 10:55:21 crc kubenswrapper[4752]: I0122 10:55:21.483967 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" event={"ID":"97a3d49a-de49-480c-8cae-0f000a7d1b8b","Type":"ContainerStarted","Data":"7a6913933b5a215bb8aee4687491f66fe8cd958435462a1769bd6e8107a8b180"} Jan 22 10:55:21 crc kubenswrapper[4752]: I0122 10:55:21.484953 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" event={"ID":"97a3d49a-de49-480c-8cae-0f000a7d1b8b","Type":"ContainerStarted","Data":"e535d160d9734387b4c4330ce2b43e9ea6de6680f8cdeadbb0c8d4886ca760b3"} Jan 22 10:55:21 crc kubenswrapper[4752]: I0122 10:55:21.507723 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" podStartSLOduration=2.007286092 podStartE2EDuration="2.507699764s" podCreationTimestamp="2026-01-22 10:55:19 +0000 UTC" firstStartedPulling="2026-01-22 10:55:20.532537849 +0000 UTC m=+1799.762480767" lastFinishedPulling="2026-01-22 10:55:21.032951511 +0000 UTC m=+1800.262894439" observedRunningTime="2026-01-22 10:55:21.501891132 +0000 UTC m=+1800.731834040" watchObservedRunningTime="2026-01-22 10:55:21.507699764 +0000 UTC m=+1800.737642672" Jan 22 10:55:27 crc kubenswrapper[4752]: I0122 10:55:27.098901 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:55:27 crc kubenswrapper[4752]: E0122 10:55:27.100003 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:55:42 crc kubenswrapper[4752]: I0122 10:55:42.098623 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:55:42 crc kubenswrapper[4752]: E0122 10:55:42.099782 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:55:45 crc kubenswrapper[4752]: I0122 10:55:45.234258 4752 scope.go:117] "RemoveContainer" containerID="f247efb91eabfe37e9509c27f0392b9f4e1adc74db40482c9c27e41d5d314581" Jan 22 10:55:45 crc kubenswrapper[4752]: I0122 10:55:45.280963 4752 scope.go:117] "RemoveContainer" containerID="72ee64ff4bceff348d88f1ce9e14116bf2c55f9807a82f7e142d4ef8da3ac678" Jan 22 10:55:45 crc kubenswrapper[4752]: I0122 10:55:45.343308 4752 scope.go:117] "RemoveContainer" containerID="212946e0b17e97fef2a8b480531a5ebbe9538ec76317ca7954449d750273cf32" Jan 22 10:55:45 crc 
kubenswrapper[4752]: I0122 10:55:45.383769 4752 scope.go:117] "RemoveContainer" containerID="369d75194e907a9b894f8ee65e5481b809e7bfb63d02aac3dc501f1bf7ff3256" Jan 22 10:55:45 crc kubenswrapper[4752]: I0122 10:55:45.442430 4752 scope.go:117] "RemoveContainer" containerID="29bc997ed8f486a7121465c5267eda002b975eb11f95217e63829bcb7ea468d1" Jan 22 10:55:55 crc kubenswrapper[4752]: I0122 10:55:55.100746 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:55:55 crc kubenswrapper[4752]: E0122 10:55:55.103091 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:55:56 crc kubenswrapper[4752]: I0122 10:55:56.054663 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-nl2ct"] Jan 22 10:55:56 crc kubenswrapper[4752]: I0122 10:55:56.067816 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9rmfm"] Jan 22 10:55:56 crc kubenswrapper[4752]: I0122 10:55:56.079160 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9rmfm"] Jan 22 10:55:56 crc kubenswrapper[4752]: I0122 10:55:56.089613 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-nl2ct"] Jan 22 10:55:56 crc kubenswrapper[4752]: I0122 10:55:56.100239 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pdjvk"] Jan 22 10:55:56 crc kubenswrapper[4752]: I0122 10:55:56.110948 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pdjvk"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.032036 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c5fa-account-create-update-w7sxp"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.048431 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-301c-account-create-update-w7p2q"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.062939 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fb42-account-create-update-gmwjz"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.072322 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-301c-account-create-update-w7p2q"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.079333 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fb42-account-create-update-gmwjz"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.085945 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c5fa-account-create-update-w7sxp"] Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.109711 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13730739-d528-4830-8bad-72e01aa444fa" path="/var/lib/kubelet/pods/13730739-d528-4830-8bad-72e01aa444fa/volumes" Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.110601 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc961e1-8eef-4fc2-a8da-fd17a08756f8" path="/var/lib/kubelet/pods/3dc961e1-8eef-4fc2-a8da-fd17a08756f8/volumes" Jan 22 
10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.111449 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9c0ce2-be7f-447d-b25f-2f4842f3e728" path="/var/lib/kubelet/pods/4d9c0ce2-be7f-447d-b25f-2f4842f3e728/volumes" Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.112350 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f47086-110a-4d61-a140-ce98aeb0e321" path="/var/lib/kubelet/pods/86f47086-110a-4d61-a140-ce98aeb0e321/volumes" Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.114115 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96960c39-6790-47bb-9f2d-9bc3aec15e70" path="/var/lib/kubelet/pods/96960c39-6790-47bb-9f2d-9bc3aec15e70/volumes" Jan 22 10:55:57 crc kubenswrapper[4752]: I0122 10:55:57.114996 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac49828-ebf2-410f-8bfd-37f8840d141d" path="/var/lib/kubelet/pods/dac49828-ebf2-410f-8bfd-37f8840d141d/volumes" Jan 22 10:56:06 crc kubenswrapper[4752]: I0122 10:56:06.098818 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:56:06 crc kubenswrapper[4752]: E0122 10:56:06.099688 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:56:18 crc kubenswrapper[4752]: I0122 10:56:18.097632 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:56:18 crc kubenswrapper[4752]: E0122 10:56:18.098593 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:56:32 crc kubenswrapper[4752]: I0122 10:56:32.097779 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:56:32 crc kubenswrapper[4752]: E0122 10:56:32.098653 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:56:33 crc kubenswrapper[4752]: I0122 10:56:33.054431 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjn85"] Jan 22 10:56:33 crc kubenswrapper[4752]: I0122 10:56:33.065974 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjn85"] Jan 22 10:56:33 crc kubenswrapper[4752]: I0122 10:56:33.113261 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16246f9-0755-4513-bd29-c487e9491528" 
path="/var/lib/kubelet/pods/e16246f9-0755-4513-bd29-c487e9491528/volumes" Jan 22 10:56:41 crc kubenswrapper[4752]: I0122 10:56:41.300049 4752 generic.go:334] "Generic (PLEG): container finished" podID="97a3d49a-de49-480c-8cae-0f000a7d1b8b" containerID="7a6913933b5a215bb8aee4687491f66fe8cd958435462a1769bd6e8107a8b180" exitCode=0 Jan 22 10:56:41 crc kubenswrapper[4752]: I0122 10:56:41.300230 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" event={"ID":"97a3d49a-de49-480c-8cae-0f000a7d1b8b","Type":"ContainerDied","Data":"7a6913933b5a215bb8aee4687491f66fe8cd958435462a1769bd6e8107a8b180"} Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.751016 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.936140 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-inventory\") pod \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.936355 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48tww\" (UniqueName: \"kubernetes.io/projected/97a3d49a-de49-480c-8cae-0f000a7d1b8b-kube-api-access-48tww\") pod \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.936483 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-ssh-key-openstack-edpm-ipam\") pod \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\" (UID: \"97a3d49a-de49-480c-8cae-0f000a7d1b8b\") " Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.943814 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a3d49a-de49-480c-8cae-0f000a7d1b8b-kube-api-access-48tww" (OuterVolumeSpecName: "kube-api-access-48tww") pod "97a3d49a-de49-480c-8cae-0f000a7d1b8b" (UID: "97a3d49a-de49-480c-8cae-0f000a7d1b8b"). InnerVolumeSpecName "kube-api-access-48tww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.967599 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-inventory" (OuterVolumeSpecName: "inventory") pod "97a3d49a-de49-480c-8cae-0f000a7d1b8b" (UID: "97a3d49a-de49-480c-8cae-0f000a7d1b8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:42 crc kubenswrapper[4752]: I0122 10:56:42.967840 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "97a3d49a-de49-480c-8cae-0f000a7d1b8b" (UID: "97a3d49a-de49-480c-8cae-0f000a7d1b8b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.038923 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.038964 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97a3d49a-de49-480c-8cae-0f000a7d1b8b-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.038978 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48tww\" (UniqueName: \"kubernetes.io/projected/97a3d49a-de49-480c-8cae-0f000a7d1b8b-kube-api-access-48tww\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.321255 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" event={"ID":"97a3d49a-de49-480c-8cae-0f000a7d1b8b","Type":"ContainerDied","Data":"e535d160d9734387b4c4330ce2b43e9ea6de6680f8cdeadbb0c8d4886ca760b3"} Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.321316 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e535d160d9734387b4c4330ce2b43e9ea6de6680f8cdeadbb0c8d4886ca760b3" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.321394 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4nrj6" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.410075 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8"] Jan 22 10:56:43 crc kubenswrapper[4752]: E0122 10:56:43.410457 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a3d49a-de49-480c-8cae-0f000a7d1b8b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.410476 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a3d49a-de49-480c-8cae-0f000a7d1b8b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.410697 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a3d49a-de49-480c-8cae-0f000a7d1b8b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.411490 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.415371 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.415712 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.417222 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.417254 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.440765 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8"] Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.549579 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfng\" (UniqueName: \"kubernetes.io/projected/a1c80d78-7197-49ef-a772-e8040ab5b6ae-kube-api-access-sqfng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.549985 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.550019 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.651497 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfng\" (UniqueName: \"kubernetes.io/projected/a1c80d78-7197-49ef-a772-e8040ab5b6ae-kube-api-access-sqfng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.651545 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.651581 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.658520 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.662261 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.689869 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfng\" (UniqueName: \"kubernetes.io/projected/a1c80d78-7197-49ef-a772-e8040ab5b6ae-kube-api-access-sqfng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-27db8\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:43 crc kubenswrapper[4752]: I0122 10:56:43.728217 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:44 crc kubenswrapper[4752]: I0122 10:56:44.223246 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8"] Jan 22 10:56:44 crc kubenswrapper[4752]: I0122 10:56:44.229199 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 10:56:44 crc kubenswrapper[4752]: I0122 10:56:44.332649 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" event={"ID":"a1c80d78-7197-49ef-a772-e8040ab5b6ae","Type":"ContainerStarted","Data":"e6a4ef55d2c760da29297bd7668a734077edbd13800a00d12c03534d65a4e96b"} Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.341880 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" event={"ID":"a1c80d78-7197-49ef-a772-e8040ab5b6ae","Type":"ContainerStarted","Data":"d1d35e25ae9878bcd929805bd30b505f2ed9eb387568bfa74aa4652a89b1a515"} Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.361951 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" podStartSLOduration=1.9028148360000001 podStartE2EDuration="2.36192801s" podCreationTimestamp="2026-01-22 10:56:43 +0000 UTC" firstStartedPulling="2026-01-22 10:56:44.22893286 +0000 UTC m=+1883.458875768" lastFinishedPulling="2026-01-22 10:56:44.688046034 +0000 UTC m=+1883.917988942" observedRunningTime="2026-01-22 10:56:45.355284326 +0000 UTC m=+1884.585227244" watchObservedRunningTime="2026-01-22 10:56:45.36192801 +0000 UTC m=+1884.591870918" 
Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.609221 4752 scope.go:117] "RemoveContainer" containerID="9029d94e65f557502056abf7f66d890f00d20a41d3b3338943fc50c2f25d60a5" Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.640693 4752 scope.go:117] "RemoveContainer" containerID="52b0fd97c04749e302d3b84d9a789ebae6a804935036b4c1aa4b8b1bef7f6b72" Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.689669 4752 scope.go:117] "RemoveContainer" containerID="1b085abfd0322d431c88e079873ce548e193962ad20113ed80534157e59ad3a2" Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.760185 4752 scope.go:117] "RemoveContainer" containerID="f760cd267e82d1783e650ca32e8191016404196906843b1f89b1fc4bf2d6f723" Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.810401 4752 scope.go:117] "RemoveContainer" containerID="59361de9a7c4ec43013d1bf057312ca321e7fee12c68cedac44854ccff024721" Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.856550 4752 scope.go:117] "RemoveContainer" containerID="090092f5caac1739a8408990ae61fc2e9e03bce7c48a426f8d670e51d293979c" Jan 22 10:56:45 crc kubenswrapper[4752]: I0122 10:56:45.903050 4752 scope.go:117] "RemoveContainer" containerID="2120b782fee42af709256ddb90e98e7d7ab1d962510b5987967446d307b05f81" Jan 22 10:56:47 crc kubenswrapper[4752]: I0122 10:56:47.098266 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:56:47 crc kubenswrapper[4752]: E0122 10:56:47.098750 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:56:50 crc kubenswrapper[4752]: I0122 10:56:50.399829 4752 generic.go:334] "Generic (PLEG): container finished" podID="a1c80d78-7197-49ef-a772-e8040ab5b6ae" containerID="d1d35e25ae9878bcd929805bd30b505f2ed9eb387568bfa74aa4652a89b1a515" exitCode=0 Jan 22 10:56:50 crc kubenswrapper[4752]: I0122 10:56:50.399922 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" event={"ID":"a1c80d78-7197-49ef-a772-e8040ab5b6ae","Type":"ContainerDied","Data":"d1d35e25ae9878bcd929805bd30b505f2ed9eb387568bfa74aa4652a89b1a515"} Jan 22 10:56:51 crc kubenswrapper[4752]: I0122 10:56:51.873795 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.022310 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqfng\" (UniqueName: \"kubernetes.io/projected/a1c80d78-7197-49ef-a772-e8040ab5b6ae-kube-api-access-sqfng\") pod \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.022435 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-inventory\") pod \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.022535 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam\") pod \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.031146 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c80d78-7197-49ef-a772-e8040ab5b6ae-kube-api-access-sqfng" (OuterVolumeSpecName: "kube-api-access-sqfng") pod "a1c80d78-7197-49ef-a772-e8040ab5b6ae" (UID: "a1c80d78-7197-49ef-a772-e8040ab5b6ae"). InnerVolumeSpecName "kube-api-access-sqfng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:56:52 crc kubenswrapper[4752]: E0122 10:56:52.049363 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam podName:a1c80d78-7197-49ef-a772-e8040ab5b6ae nodeName:}" failed. No retries permitted until 2026-01-22 10:56:52.549332824 +0000 UTC m=+1891.779275732 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam") pod "a1c80d78-7197-49ef-a772-e8040ab5b6ae" (UID: "a1c80d78-7197-49ef-a772-e8040ab5b6ae") : error deleting /var/lib/kubelet/pods/a1c80d78-7197-49ef-a772-e8040ab5b6ae/volume-subpaths: remove /var/lib/kubelet/pods/a1c80d78-7197-49ef-a772-e8040ab5b6ae/volume-subpaths: no such file or directory Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.051884 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-inventory" (OuterVolumeSpecName: "inventory") pod "a1c80d78-7197-49ef-a772-e8040ab5b6ae" (UID: "a1c80d78-7197-49ef-a772-e8040ab5b6ae"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.124288 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqfng\" (UniqueName: \"kubernetes.io/projected/a1c80d78-7197-49ef-a772-e8040ab5b6ae-kube-api-access-sqfng\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.124320 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.421571 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" event={"ID":"a1c80d78-7197-49ef-a772-e8040ab5b6ae","Type":"ContainerDied","Data":"e6a4ef55d2c760da29297bd7668a734077edbd13800a00d12c03534d65a4e96b"} Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.421635 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6a4ef55d2c760da29297bd7668a734077edbd13800a00d12c03534d65a4e96b" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.421634 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-27db8" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.524354 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9"] Jan 22 10:56:52 crc kubenswrapper[4752]: E0122 10:56:52.525005 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c80d78-7197-49ef-a772-e8040ab5b6ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.525040 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c80d78-7197-49ef-a772-e8040ab5b6ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.525419 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c80d78-7197-49ef-a772-e8040ab5b6ae" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.526579 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.541961 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9"] Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.635237 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam\") pod \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\" (UID: \"a1c80d78-7197-49ef-a772-e8040ab5b6ae\") " Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.635806 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.635955 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.636015 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jds6\" (UniqueName: \"kubernetes.io/projected/dfcba2e1-e626-4899-8fa9-22e0e93d561f-kube-api-access-4jds6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.639098 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a1c80d78-7197-49ef-a772-e8040ab5b6ae" (UID: "a1c80d78-7197-49ef-a772-e8040ab5b6ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.737380 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.737450 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jds6\" (UniqueName: \"kubernetes.io/projected/dfcba2e1-e626-4899-8fa9-22e0e93d561f-kube-api-access-4jds6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.737686 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.737785 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1c80d78-7197-49ef-a772-e8040ab5b6ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.741362 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.741485 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.758133 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jds6\" (UniqueName: \"kubernetes.io/projected/dfcba2e1-e626-4899-8fa9-22e0e93d561f-kube-api-access-4jds6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89xm9\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:52 crc kubenswrapper[4752]: I0122 10:56:52.867693 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:56:53 crc kubenswrapper[4752]: I0122 10:56:53.215482 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9"] Jan 22 10:56:53 crc kubenswrapper[4752]: I0122 10:56:53.428982 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" event={"ID":"dfcba2e1-e626-4899-8fa9-22e0e93d561f","Type":"ContainerStarted","Data":"22300c92a67b31e327a3d653eef685ca306602221099fcac4b173001f21d23ab"} Jan 22 10:56:54 crc kubenswrapper[4752]: I0122 10:56:54.442178 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" event={"ID":"dfcba2e1-e626-4899-8fa9-22e0e93d561f","Type":"ContainerStarted","Data":"20347c3b9766bacecd27be399ad5dee95474766be9138e29afbd926b5793dc0f"} Jan 22 10:56:54 crc kubenswrapper[4752]: I0122 10:56:54.457655 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" podStartSLOduration=2.047027075 podStartE2EDuration="2.457631687s" podCreationTimestamp="2026-01-22 10:56:52 +0000 UTC" firstStartedPulling="2026-01-22 10:56:53.229754155 +0000 UTC m=+1892.459697083" lastFinishedPulling="2026-01-22 10:56:53.640358787 +0000 UTC m=+1892.870301695" observedRunningTime="2026-01-22 10:56:54.45695889 +0000 UTC m=+1893.686901798" watchObservedRunningTime="2026-01-22 10:56:54.457631687 +0000 UTC m=+1893.687574595" Jan 22 10:56:56 crc kubenswrapper[4752]: I0122 10:56:56.047474 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9sfs"] Jan 22 10:56:56 crc kubenswrapper[4752]: I0122 10:56:56.063804 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-b9sfs"] Jan 22 10:56:57 crc kubenswrapper[4752]: I0122 10:56:57.109866 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d61b3d-9898-4e80-9ab1-9693bb60fcbe" path="/var/lib/kubelet/pods/b3d61b3d-9898-4e80-9ab1-9693bb60fcbe/volumes" Jan 22 10:57:02 crc kubenswrapper[4752]: I0122 10:57:02.029488 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-clql4"] Jan 22 10:57:02 crc kubenswrapper[4752]: I0122 10:57:02.038912 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-clql4"] Jan 22 10:57:02 crc kubenswrapper[4752]: I0122 10:57:02.098559 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:57:02 crc kubenswrapper[4752]: E0122 10:57:02.098908 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:57:03 crc kubenswrapper[4752]: I0122 10:57:03.109086 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e19f28-ee06-4011-99f5-76be05faf55f" path="/var/lib/kubelet/pods/93e19f28-ee06-4011-99f5-76be05faf55f/volumes" Jan 22 10:57:13 crc kubenswrapper[4752]: I0122 10:57:13.098605 4752 scope.go:117] "RemoveContainer" 
containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:57:13 crc kubenswrapper[4752]: E0122 10:57:13.099509 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:57:27 crc kubenswrapper[4752]: I0122 10:57:27.098629 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:57:27 crc kubenswrapper[4752]: E0122 10:57:27.099299 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:57:38 crc kubenswrapper[4752]: I0122 10:57:38.098794 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:57:38 crc kubenswrapper[4752]: E0122 10:57:38.099696 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:57:38 crc kubenswrapper[4752]: I0122 10:57:38.900206 4752 generic.go:334] "Generic (PLEG): container finished" podID="dfcba2e1-e626-4899-8fa9-22e0e93d561f" containerID="20347c3b9766bacecd27be399ad5dee95474766be9138e29afbd926b5793dc0f" exitCode=0 Jan 22 10:57:38 crc kubenswrapper[4752]: I0122 10:57:38.900251 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" event={"ID":"dfcba2e1-e626-4899-8fa9-22e0e93d561f","Type":"ContainerDied","Data":"20347c3b9766bacecd27be399ad5dee95474766be9138e29afbd926b5793dc0f"} Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.316902 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.449823 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-inventory\") pod \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.449902 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-ssh-key-openstack-edpm-ipam\") pod \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.449996 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jds6\" (UniqueName: \"kubernetes.io/projected/dfcba2e1-e626-4899-8fa9-22e0e93d561f-kube-api-access-4jds6\") pod \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\" (UID: \"dfcba2e1-e626-4899-8fa9-22e0e93d561f\") " Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.454966 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcba2e1-e626-4899-8fa9-22e0e93d561f-kube-api-access-4jds6" (OuterVolumeSpecName: "kube-api-access-4jds6") pod "dfcba2e1-e626-4899-8fa9-22e0e93d561f" (UID: "dfcba2e1-e626-4899-8fa9-22e0e93d561f"). InnerVolumeSpecName "kube-api-access-4jds6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.474437 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfcba2e1-e626-4899-8fa9-22e0e93d561f" (UID: "dfcba2e1-e626-4899-8fa9-22e0e93d561f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.487103 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-inventory" (OuterVolumeSpecName: "inventory") pod "dfcba2e1-e626-4899-8fa9-22e0e93d561f" (UID: "dfcba2e1-e626-4899-8fa9-22e0e93d561f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.551881 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.551912 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfcba2e1-e626-4899-8fa9-22e0e93d561f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.551922 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jds6\" (UniqueName: \"kubernetes.io/projected/dfcba2e1-e626-4899-8fa9-22e0e93d561f-kube-api-access-4jds6\") on node \"crc\" DevicePath \"\"" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.930499 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" event={"ID":"dfcba2e1-e626-4899-8fa9-22e0e93d561f","Type":"ContainerDied","Data":"22300c92a67b31e327a3d653eef685ca306602221099fcac4b173001f21d23ab"} Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.930555 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22300c92a67b31e327a3d653eef685ca306602221099fcac4b173001f21d23ab" Jan 22 10:57:40 crc kubenswrapper[4752]: I0122 10:57:40.930618 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89xm9" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.016436 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm"] Jan 22 10:57:41 crc kubenswrapper[4752]: E0122 10:57:41.016824 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcba2e1-e626-4899-8fa9-22e0e93d561f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.016841 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcba2e1-e626-4899-8fa9-22e0e93d561f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.017067 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcba2e1-e626-4899-8fa9-22e0e93d561f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.017683 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.020414 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.020484 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.020833 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.020913 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.042212 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm"] Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.161614 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdch\" (UniqueName: \"kubernetes.io/projected/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-kube-api-access-dwdch\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.161683 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.162065 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.263555 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.263680 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwdch\" (UniqueName: \"kubernetes.io/projected/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-kube-api-access-dwdch\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.263732 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.266924 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.267683 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.280111 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwdch\" (UniqueName: \"kubernetes.io/projected/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-kube-api-access-dwdch\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.337812 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.864724 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm"] Jan 22 10:57:41 crc kubenswrapper[4752]: I0122 10:57:41.940671 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" event={"ID":"3b471fa4-c0c1-4d37-b313-5873d06f3dfc","Type":"ContainerStarted","Data":"58e290d101d9ed76534b18f5706056ebc81181325b43d8cefbccc063185df9f0"} Jan 22 10:57:42 crc kubenswrapper[4752]: I0122 10:57:42.037782 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ftcrp"] Jan 22 10:57:42 crc kubenswrapper[4752]: I0122 10:57:42.047675 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ftcrp"] Jan 22 10:57:42 crc kubenswrapper[4752]: I0122 10:57:42.952793 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" event={"ID":"3b471fa4-c0c1-4d37-b313-5873d06f3dfc","Type":"ContainerStarted","Data":"cb37c8d49ddddf8522145c86dfb50767540ae842f985ec86d235f0a478198d3c"} Jan 22 10:57:42 crc kubenswrapper[4752]: I0122 10:57:42.977486 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" podStartSLOduration=2.336555995 podStartE2EDuration="2.977467903s" podCreationTimestamp="2026-01-22 10:57:40 +0000 UTC" firstStartedPulling="2026-01-22 10:57:41.881605622 +0000 UTC m=+1941.111548530" lastFinishedPulling="2026-01-22 10:57:42.52251752 +0000 UTC m=+1941.752460438" observedRunningTime="2026-01-22 10:57:42.96903698 +0000 
UTC m=+1942.198979898" watchObservedRunningTime="2026-01-22 10:57:42.977467903 +0000 UTC m=+1942.207410811" Jan 22 10:57:43 crc kubenswrapper[4752]: I0122 10:57:43.117947 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f13d789-0b3a-4c60-85f5-0e7d02610526" path="/var/lib/kubelet/pods/1f13d789-0b3a-4c60-85f5-0e7d02610526/volumes" Jan 22 10:57:46 crc kubenswrapper[4752]: I0122 10:57:46.075796 4752 scope.go:117] "RemoveContainer" containerID="bbcc8a94edc9a1e6d721521a1e7e5ade57527721e1f2e72f99f26fc66ce68656" Jan 22 10:57:46 crc kubenswrapper[4752]: I0122 10:57:46.129557 4752 scope.go:117] "RemoveContainer" containerID="b1256696a149ffb295c59cf8235f48341acf4a84955f640e0b22d0d352fc70fc" Jan 22 10:57:46 crc kubenswrapper[4752]: I0122 10:57:46.193138 4752 scope.go:117] "RemoveContainer" containerID="a3d670a343fa7cc0af2d23aa6d17b844e314e9908c3a821aecedab20cca99cd5" Jan 22 10:57:52 crc kubenswrapper[4752]: I0122 10:57:52.099172 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:57:52 crc kubenswrapper[4752]: E0122 10:57:52.100960 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:58:07 crc kubenswrapper[4752]: I0122 10:58:07.098258 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:58:07 crc kubenswrapper[4752]: E0122 10:58:07.099060 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:58:19 crc kubenswrapper[4752]: I0122 10:58:19.097894 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:58:19 crc kubenswrapper[4752]: E0122 10:58:19.098696 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.105791 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.734071 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdc5r"] Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.736861 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.757937 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdc5r"] Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.803288 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xbf\" (UniqueName: \"kubernetes.io/projected/727da93c-0026-4fd8-ba89-30257de974d4-kube-api-access-r6xbf\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.803536 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-catalog-content\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.803596 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-utilities\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.905311 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xbf\" (UniqueName: \"kubernetes.io/projected/727da93c-0026-4fd8-ba89-30257de974d4-kube-api-access-r6xbf\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.905442 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-catalog-content\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.905472 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-utilities\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.906093 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-utilities\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.906365 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-catalog-content\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.949609 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r6xbf\" (UniqueName: \"kubernetes.io/projected/727da93c-0026-4fd8-ba89-30257de974d4-kube-api-access-r6xbf\") pod \"redhat-operators-cdc5r\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:31 crc kubenswrapper[4752]: I0122 10:58:31.968081 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"70cbafe9440260ad6827dda2f3848d07643b15d146fa166af3af676a25a18d68"} Jan 22 10:58:32 crc kubenswrapper[4752]: I0122 10:58:32.070242 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:32 crc kubenswrapper[4752]: I0122 10:58:32.587828 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdc5r"] Jan 22 10:58:32 crc kubenswrapper[4752]: I0122 10:58:32.979110 4752 generic.go:334] "Generic (PLEG): container finished" podID="727da93c-0026-4fd8-ba89-30257de974d4" containerID="ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968" exitCode=0 Jan 22 10:58:32 crc kubenswrapper[4752]: I0122 10:58:32.979351 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerDied","Data":"ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968"} Jan 22 10:58:32 crc kubenswrapper[4752]: I0122 10:58:32.979964 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerStarted","Data":"0cc45d9da6bbef20d0711f80924b797db4de3f346221f0f8d1acd30a5808f9da"} Jan 22 10:58:34 crc kubenswrapper[4752]: I0122 10:58:34.998547 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerStarted","Data":"f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e"} Jan 22 10:58:36 crc kubenswrapper[4752]: I0122 10:58:36.009723 4752 generic.go:334] "Generic (PLEG): container finished" podID="727da93c-0026-4fd8-ba89-30257de974d4" containerID="f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e" exitCode=0 Jan 22 10:58:36 crc kubenswrapper[4752]: I0122 10:58:36.009866 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerDied","Data":"f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e"} Jan 22 10:58:38 crc kubenswrapper[4752]: I0122 10:58:38.032130 4752 generic.go:334] "Generic (PLEG): container finished" podID="3b471fa4-c0c1-4d37-b313-5873d06f3dfc" containerID="cb37c8d49ddddf8522145c86dfb50767540ae842f985ec86d235f0a478198d3c" exitCode=0 Jan 22 10:58:38 crc kubenswrapper[4752]: I0122 10:58:38.032218 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" event={"ID":"3b471fa4-c0c1-4d37-b313-5873d06f3dfc","Type":"ContainerDied","Data":"cb37c8d49ddddf8522145c86dfb50767540ae842f985ec86d235f0a478198d3c"} Jan 22 10:58:38 crc kubenswrapper[4752]: I0122 10:58:38.035713 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" 
event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerStarted","Data":"3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84"} Jan 22 10:58:38 crc kubenswrapper[4752]: I0122 10:58:38.076421 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdc5r" podStartSLOduration=2.258675062 podStartE2EDuration="7.07639269s" podCreationTimestamp="2026-01-22 10:58:31 +0000 UTC" firstStartedPulling="2026-01-22 10:58:32.981723056 +0000 UTC m=+1992.211665964" lastFinishedPulling="2026-01-22 10:58:37.799440684 +0000 UTC m=+1997.029383592" observedRunningTime="2026-01-22 10:58:38.073085813 +0000 UTC m=+1997.303028721" watchObservedRunningTime="2026-01-22 10:58:38.07639269 +0000 UTC m=+1997.306335608" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.481609 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.574398 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-ssh-key-openstack-edpm-ipam\") pod \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.574457 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-inventory\") pod \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.574501 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwdch\" (UniqueName: \"kubernetes.io/projected/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-kube-api-access-dwdch\") pod \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\" (UID: \"3b471fa4-c0c1-4d37-b313-5873d06f3dfc\") " Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.596083 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-kube-api-access-dwdch" (OuterVolumeSpecName: "kube-api-access-dwdch") pod "3b471fa4-c0c1-4d37-b313-5873d06f3dfc" (UID: "3b471fa4-c0c1-4d37-b313-5873d06f3dfc"). InnerVolumeSpecName "kube-api-access-dwdch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.610407 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b471fa4-c0c1-4d37-b313-5873d06f3dfc" (UID: "3b471fa4-c0c1-4d37-b313-5873d06f3dfc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.613136 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-inventory" (OuterVolumeSpecName: "inventory") pod "3b471fa4-c0c1-4d37-b313-5873d06f3dfc" (UID: "3b471fa4-c0c1-4d37-b313-5873d06f3dfc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.677400 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.677452 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwdch\" (UniqueName: \"kubernetes.io/projected/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-kube-api-access-dwdch\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:39 crc kubenswrapper[4752]: I0122 10:58:39.677465 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b471fa4-c0c1-4d37-b313-5873d06f3dfc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.051646 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" event={"ID":"3b471fa4-c0c1-4d37-b313-5873d06f3dfc","Type":"ContainerDied","Data":"58e290d101d9ed76534b18f5706056ebc81181325b43d8cefbccc063185df9f0"} Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.051688 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2jbnm" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.051690 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e290d101d9ed76534b18f5706056ebc81181325b43d8cefbccc063185df9f0" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.158771 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cz7jr"] Jan 22 10:58:40 crc kubenswrapper[4752]: E0122 10:58:40.159350 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b471fa4-c0c1-4d37-b313-5873d06f3dfc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.159371 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b471fa4-c0c1-4d37-b313-5873d06f3dfc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.159639 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b471fa4-c0c1-4d37-b313-5873d06f3dfc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.160486 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.163785 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.164091 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.164216 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.165148 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.168583 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cz7jr"] Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.287323 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.287407 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4btm\" (UniqueName: \"kubernetes.io/projected/775ef112-e9bf-4c6a-a893-6916c3efaa4d-kube-api-access-h4btm\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.287629 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.390161 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.390228 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4btm\" (UniqueName: \"kubernetes.io/projected/775ef112-e9bf-4c6a-a893-6916c3efaa4d-kube-api-access-h4btm\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.390293 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc 
kubenswrapper[4752]: I0122 10:58:40.403476 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.404145 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.411378 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4btm\" (UniqueName: \"kubernetes.io/projected/775ef112-e9bf-4c6a-a893-6916c3efaa4d-kube-api-access-h4btm\") pod \"ssh-known-hosts-edpm-deployment-cz7jr\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:40 crc kubenswrapper[4752]: I0122 10:58:40.480039 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:41 crc kubenswrapper[4752]: I0122 10:58:41.032619 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cz7jr"] Jan 22 10:58:41 crc kubenswrapper[4752]: W0122 10:58:41.042344 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775ef112_e9bf_4c6a_a893_6916c3efaa4d.slice/crio-f717a9dd200e03a7b59b6ea43e14c74b582e68e230ab1b7485c1a8ee77258173 WatchSource:0}: Error finding container f717a9dd200e03a7b59b6ea43e14c74b582e68e230ab1b7485c1a8ee77258173: Status 404 returned error can't find the container with id f717a9dd200e03a7b59b6ea43e14c74b582e68e230ab1b7485c1a8ee77258173 Jan 22 10:58:41 crc kubenswrapper[4752]: I0122 10:58:41.069763 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" event={"ID":"775ef112-e9bf-4c6a-a893-6916c3efaa4d","Type":"ContainerStarted","Data":"f717a9dd200e03a7b59b6ea43e14c74b582e68e230ab1b7485c1a8ee77258173"} Jan 22 10:58:42 crc kubenswrapper[4752]: I0122 10:58:42.074756 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:42 crc kubenswrapper[4752]: I0122 10:58:42.075068 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:43 crc kubenswrapper[4752]: I0122 10:58:43.108639 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" event={"ID":"775ef112-e9bf-4c6a-a893-6916c3efaa4d","Type":"ContainerStarted","Data":"891adf60443b0e899e5a3522a855c611103e2352da64030d23c2918c76b4e2d5"} Jan 22 10:58:43 crc kubenswrapper[4752]: I0122 10:58:43.137394 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdc5r" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="registry-server" probeResult="failure" output=< Jan 22 10:58:43 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 10:58:43 crc kubenswrapper[4752]: > Jan 
22 10:58:50 crc kubenswrapper[4752]: I0122 10:58:50.166062 4752 generic.go:334] "Generic (PLEG): container finished" podID="775ef112-e9bf-4c6a-a893-6916c3efaa4d" containerID="891adf60443b0e899e5a3522a855c611103e2352da64030d23c2918c76b4e2d5" exitCode=0 Jan 22 10:58:50 crc kubenswrapper[4752]: I0122 10:58:50.166140 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" event={"ID":"775ef112-e9bf-4c6a-a893-6916c3efaa4d","Type":"ContainerDied","Data":"891adf60443b0e899e5a3522a855c611103e2352da64030d23c2918c76b4e2d5"} Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.723415 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.822794 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4btm\" (UniqueName: \"kubernetes.io/projected/775ef112-e9bf-4c6a-a893-6916c3efaa4d-kube-api-access-h4btm\") pod \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.823252 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-inventory-0\") pod \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.823389 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-ssh-key-openstack-edpm-ipam\") pod \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\" (UID: \"775ef112-e9bf-4c6a-a893-6916c3efaa4d\") " Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.837151 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775ef112-e9bf-4c6a-a893-6916c3efaa4d-kube-api-access-h4btm" (OuterVolumeSpecName: "kube-api-access-h4btm") pod "775ef112-e9bf-4c6a-a893-6916c3efaa4d" (UID: "775ef112-e9bf-4c6a-a893-6916c3efaa4d"). InnerVolumeSpecName "kube-api-access-h4btm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.863328 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "775ef112-e9bf-4c6a-a893-6916c3efaa4d" (UID: "775ef112-e9bf-4c6a-a893-6916c3efaa4d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.863446 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "775ef112-e9bf-4c6a-a893-6916c3efaa4d" (UID: "775ef112-e9bf-4c6a-a893-6916c3efaa4d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.928385 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.928439 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4btm\" (UniqueName: \"kubernetes.io/projected/775ef112-e9bf-4c6a-a893-6916c3efaa4d-kube-api-access-h4btm\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:51 crc kubenswrapper[4752]: I0122 10:58:51.928456 4752 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/775ef112-e9bf-4c6a-a893-6916c3efaa4d-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.118025 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.178957 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.244999 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.245080 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz7jr" event={"ID":"775ef112-e9bf-4c6a-a893-6916c3efaa4d","Type":"ContainerDied","Data":"f717a9dd200e03a7b59b6ea43e14c74b582e68e230ab1b7485c1a8ee77258173"} Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.245116 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f717a9dd200e03a7b59b6ea43e14c74b582e68e230ab1b7485c1a8ee77258173" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.337464 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr"] Jan 22 10:58:52 crc kubenswrapper[4752]: E0122 10:58:52.338043 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775ef112-e9bf-4c6a-a893-6916c3efaa4d" containerName="ssh-known-hosts-edpm-deployment" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.338073 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="775ef112-e9bf-4c6a-a893-6916c3efaa4d" containerName="ssh-known-hosts-edpm-deployment" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.338381 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="775ef112-e9bf-4c6a-a893-6916c3efaa4d" containerName="ssh-known-hosts-edpm-deployment" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.339455 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.348114 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr"] Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.358663 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.358778 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.358946 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.358970 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.447361 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdc5r"] Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.448124 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5njws\" (UniqueName: \"kubernetes.io/projected/ad07cbe6-0802-4807-84a4-5730f8698e1a-kube-api-access-5njws\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.448227 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.448267 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.550024 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.550169 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5njws\" (UniqueName: \"kubernetes.io/projected/ad07cbe6-0802-4807-84a4-5730f8698e1a-kube-api-access-5njws\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.550282 4752 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.554721 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.559536 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.570354 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5njws\" (UniqueName: \"kubernetes.io/projected/ad07cbe6-0802-4807-84a4-5730f8698e1a-kube-api-access-5njws\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2xlgr\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:52 crc kubenswrapper[4752]: I0122 10:58:52.680438 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.213104 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr"] Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.256050 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdc5r" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="registry-server" containerID="cri-o://3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84" gracePeriod=2 Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.256343 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" event={"ID":"ad07cbe6-0802-4807-84a4-5730f8698e1a","Type":"ContainerStarted","Data":"4ebbc0b87762b826fbe90c2131e0660b283bd99c362747097f17798ab82704f3"} Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.817981 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.881462 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-catalog-content\") pod \"727da93c-0026-4fd8-ba89-30257de974d4\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.881523 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-utilities\") pod \"727da93c-0026-4fd8-ba89-30257de974d4\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.881662 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6xbf\" (UniqueName: \"kubernetes.io/projected/727da93c-0026-4fd8-ba89-30257de974d4-kube-api-access-r6xbf\") pod \"727da93c-0026-4fd8-ba89-30257de974d4\" (UID: \"727da93c-0026-4fd8-ba89-30257de974d4\") " Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.884876 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-utilities" (OuterVolumeSpecName: "utilities") pod "727da93c-0026-4fd8-ba89-30257de974d4" (UID: "727da93c-0026-4fd8-ba89-30257de974d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.902477 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727da93c-0026-4fd8-ba89-30257de974d4-kube-api-access-r6xbf" (OuterVolumeSpecName: "kube-api-access-r6xbf") pod "727da93c-0026-4fd8-ba89-30257de974d4" (UID: "727da93c-0026-4fd8-ba89-30257de974d4"). InnerVolumeSpecName "kube-api-access-r6xbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.983552 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6xbf\" (UniqueName: \"kubernetes.io/projected/727da93c-0026-4fd8-ba89-30257de974d4-kube-api-access-r6xbf\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:53 crc kubenswrapper[4752]: I0122 10:58:53.983585 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.022826 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "727da93c-0026-4fd8-ba89-30257de974d4" (UID: "727da93c-0026-4fd8-ba89-30257de974d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.085830 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727da93c-0026-4fd8-ba89-30257de974d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.270370 4752 generic.go:334] "Generic (PLEG): container finished" podID="727da93c-0026-4fd8-ba89-30257de974d4" containerID="3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84" exitCode=0 Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.270435 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdc5r" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.270488 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerDied","Data":"3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84"} Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.270908 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdc5r" event={"ID":"727da93c-0026-4fd8-ba89-30257de974d4","Type":"ContainerDied","Data":"0cc45d9da6bbef20d0711f80924b797db4de3f346221f0f8d1acd30a5808f9da"} Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.270938 4752 scope.go:117] "RemoveContainer" containerID="3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.272348 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" event={"ID":"ad07cbe6-0802-4807-84a4-5730f8698e1a","Type":"ContainerStarted","Data":"525aca634b770d58b479ed20f7c138cc999d4145e1d924955f9df8fe3b641ff2"} Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.292136 4752 scope.go:117] "RemoveContainer" containerID="f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.292915 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" podStartSLOduration=1.818439416 podStartE2EDuration="2.292891602s" podCreationTimestamp="2026-01-22 10:58:52 +0000 UTC" firstStartedPulling="2026-01-22 10:58:53.220720267 +0000 UTC m=+2012.450663175" lastFinishedPulling="2026-01-22 10:58:53.695172453 +0000 UTC m=+2012.925115361" observedRunningTime="2026-01-22 10:58:54.290724675 +0000 UTC m=+2013.520667603" watchObservedRunningTime="2026-01-22 10:58:54.292891602 +0000 UTC m=+2013.522834510" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.322808 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdc5r"] Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.333492 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdc5r"] Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.340448 4752 scope.go:117] "RemoveContainer" containerID="ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.374485 4752 scope.go:117] "RemoveContainer" containerID="3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84" Jan 22 10:58:54 crc kubenswrapper[4752]: E0122 10:58:54.375080 4752 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84\": container with ID starting with 3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84 not found: ID does not exist" containerID="3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.375124 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84"} err="failed to get container status \"3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84\": rpc error: code = NotFound desc = could not find container \"3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84\": container with ID starting with 3ef42848dc08b7a48e7d2fd2b97b2467fda391b892e0efe4befac0bbf7c98d84 not found: ID does not exist" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.375153 4752 scope.go:117] "RemoveContainer" containerID="f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e" Jan 22 10:58:54 crc kubenswrapper[4752]: E0122 10:58:54.375603 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e\": container with ID starting with f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e not found: ID does not exist" containerID="f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.375625 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e"} err="failed to get container status \"f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e\": rpc error: code = NotFound desc = could not find container \"f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e\": container with ID starting with f6bc131457b4ed229b8e45311e5d1b751b95eacd1fae04243f8ea240d0de8a7e not found: ID does not exist" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.375643 4752 scope.go:117] "RemoveContainer" containerID="ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968" Jan 22 10:58:54 crc kubenswrapper[4752]: E0122 10:58:54.375993 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968\": container with ID starting with ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968 not found: ID does not exist" containerID="ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968" Jan 22 10:58:54 crc kubenswrapper[4752]: I0122 10:58:54.376039 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968"} err="failed to get container status \"ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968\": rpc error: code = NotFound desc = could not find container \"ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968\": container with ID starting with ed12b6295d7483e379f4396e2d4c5351d637510a5eff153f56a6a7a6b7c92968 not found: ID does not exist" Jan 22 10:58:55 crc kubenswrapper[4752]: I0122 10:58:55.109670 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="727da93c-0026-4fd8-ba89-30257de974d4" path="/var/lib/kubelet/pods/727da93c-0026-4fd8-ba89-30257de974d4/volumes" Jan 22 10:59:03 crc kubenswrapper[4752]: I0122 10:59:03.365693 4752 generic.go:334] "Generic (PLEG): container finished" podID="ad07cbe6-0802-4807-84a4-5730f8698e1a" containerID="525aca634b770d58b479ed20f7c138cc999d4145e1d924955f9df8fe3b641ff2" exitCode=0 Jan 22 10:59:03 crc kubenswrapper[4752]: I0122 10:59:03.365758 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" event={"ID":"ad07cbe6-0802-4807-84a4-5730f8698e1a","Type":"ContainerDied","Data":"525aca634b770d58b479ed20f7c138cc999d4145e1d924955f9df8fe3b641ff2"} Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.792472 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.848063 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-ssh-key-openstack-edpm-ipam\") pod \"ad07cbe6-0802-4807-84a4-5730f8698e1a\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.848220 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5njws\" (UniqueName: \"kubernetes.io/projected/ad07cbe6-0802-4807-84a4-5730f8698e1a-kube-api-access-5njws\") pod \"ad07cbe6-0802-4807-84a4-5730f8698e1a\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.848319 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-inventory\") pod \"ad07cbe6-0802-4807-84a4-5730f8698e1a\" (UID: \"ad07cbe6-0802-4807-84a4-5730f8698e1a\") " Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.856299 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad07cbe6-0802-4807-84a4-5730f8698e1a-kube-api-access-5njws" (OuterVolumeSpecName: "kube-api-access-5njws") pod "ad07cbe6-0802-4807-84a4-5730f8698e1a" (UID: "ad07cbe6-0802-4807-84a4-5730f8698e1a"). InnerVolumeSpecName "kube-api-access-5njws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.881537 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad07cbe6-0802-4807-84a4-5730f8698e1a" (UID: "ad07cbe6-0802-4807-84a4-5730f8698e1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.882007 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-inventory" (OuterVolumeSpecName: "inventory") pod "ad07cbe6-0802-4807-84a4-5730f8698e1a" (UID: "ad07cbe6-0802-4807-84a4-5730f8698e1a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.950950 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.950978 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5njws\" (UniqueName: \"kubernetes.io/projected/ad07cbe6-0802-4807-84a4-5730f8698e1a-kube-api-access-5njws\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:04 crc kubenswrapper[4752]: I0122 10:59:04.950989 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad07cbe6-0802-4807-84a4-5730f8698e1a-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.387258 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" event={"ID":"ad07cbe6-0802-4807-84a4-5730f8698e1a","Type":"ContainerDied","Data":"4ebbc0b87762b826fbe90c2131e0660b283bd99c362747097f17798ab82704f3"} Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.387684 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ebbc0b87762b826fbe90c2131e0660b283bd99c362747097f17798ab82704f3" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.387330 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2xlgr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.505129 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr"] Jan 22 10:59:05 crc kubenswrapper[4752]: E0122 10:59:05.505591 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="extract-content" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.505612 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="extract-content" Jan 22 10:59:05 crc kubenswrapper[4752]: E0122 10:59:05.505644 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="registry-server" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.505656 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="registry-server" Jan 22 10:59:05 crc kubenswrapper[4752]: E0122 10:59:05.505673 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad07cbe6-0802-4807-84a4-5730f8698e1a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.505683 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad07cbe6-0802-4807-84a4-5730f8698e1a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:59:05 crc kubenswrapper[4752]: E0122 10:59:05.505701 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="extract-utilities" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.505711 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="extract-utilities" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.506015 4752 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ad07cbe6-0802-4807-84a4-5730f8698e1a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.506040 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="727da93c-0026-4fd8-ba89-30257de974d4" containerName="registry-server" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.506815 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.509071 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.510132 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.510258 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.510330 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.518726 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr"] Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.575485 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576kx\" (UniqueName: \"kubernetes.io/projected/7f67bb64-83be-481c-a5d5-e836f0741fbc-kube-api-access-576kx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.575574 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.575635 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.677238 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-576kx\" (UniqueName: \"kubernetes.io/projected/7f67bb64-83be-481c-a5d5-e836f0741fbc-kube-api-access-576kx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.677352 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.677416 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.683218 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.683291 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.703099 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-576kx\" (UniqueName: \"kubernetes.io/projected/7f67bb64-83be-481c-a5d5-e836f0741fbc-kube-api-access-576kx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:05 crc kubenswrapper[4752]: I0122 10:59:05.843995 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:06 crc kubenswrapper[4752]: I0122 10:59:06.472742 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr"] Jan 22 10:59:07 crc kubenswrapper[4752]: I0122 10:59:07.420625 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" event={"ID":"7f67bb64-83be-481c-a5d5-e836f0741fbc","Type":"ContainerStarted","Data":"94e31ed6857793a9b5e1c77e1ddade0db8f6fec9c7e484b3168ceac16cee328c"} Jan 22 10:59:07 crc kubenswrapper[4752]: I0122 10:59:07.421345 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" event={"ID":"7f67bb64-83be-481c-a5d5-e836f0741fbc","Type":"ContainerStarted","Data":"d0e9c032d45ee16a1a6b4da97a77b5ac46ff35230627dd91dc34bbdb0c3b3eec"} Jan 22 10:59:07 crc kubenswrapper[4752]: I0122 10:59:07.445774 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" podStartSLOduration=1.875119255 podStartE2EDuration="2.44574817s" podCreationTimestamp="2026-01-22 10:59:05 +0000 UTC" firstStartedPulling="2026-01-22 10:59:06.483602947 +0000 UTC m=+2025.713545855" lastFinishedPulling="2026-01-22 10:59:07.054231812 +0000 UTC m=+2026.284174770" observedRunningTime="2026-01-22 10:59:07.441371295 +0000 UTC m=+2026.671314243" watchObservedRunningTime="2026-01-22 10:59:07.44574817 +0000 UTC m=+2026.675691118" Jan 22 10:59:17 crc kubenswrapper[4752]: I0122 10:59:17.522701 4752 generic.go:334] "Generic (PLEG): container finished" podID="7f67bb64-83be-481c-a5d5-e836f0741fbc" containerID="94e31ed6857793a9b5e1c77e1ddade0db8f6fec9c7e484b3168ceac16cee328c" exitCode=0 Jan 22 10:59:17 crc kubenswrapper[4752]: I0122 10:59:17.522806 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" event={"ID":"7f67bb64-83be-481c-a5d5-e836f0741fbc","Type":"ContainerDied","Data":"94e31ed6857793a9b5e1c77e1ddade0db8f6fec9c7e484b3168ceac16cee328c"} Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.022007 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.094078 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-inventory\") pod \"7f67bb64-83be-481c-a5d5-e836f0741fbc\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.094214 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-576kx\" (UniqueName: \"kubernetes.io/projected/7f67bb64-83be-481c-a5d5-e836f0741fbc-kube-api-access-576kx\") pod \"7f67bb64-83be-481c-a5d5-e836f0741fbc\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.094327 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-ssh-key-openstack-edpm-ipam\") pod \"7f67bb64-83be-481c-a5d5-e836f0741fbc\" (UID: \"7f67bb64-83be-481c-a5d5-e836f0741fbc\") " Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.104093 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f67bb64-83be-481c-a5d5-e836f0741fbc-kube-api-access-576kx" (OuterVolumeSpecName: "kube-api-access-576kx") pod "7f67bb64-83be-481c-a5d5-e836f0741fbc" (UID: "7f67bb64-83be-481c-a5d5-e836f0741fbc"). InnerVolumeSpecName "kube-api-access-576kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.130951 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f67bb64-83be-481c-a5d5-e836f0741fbc" (UID: "7f67bb64-83be-481c-a5d5-e836f0741fbc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.147785 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-inventory" (OuterVolumeSpecName: "inventory") pod "7f67bb64-83be-481c-a5d5-e836f0741fbc" (UID: "7f67bb64-83be-481c-a5d5-e836f0741fbc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.196702 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.197021 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-576kx\" (UniqueName: \"kubernetes.io/projected/7f67bb64-83be-481c-a5d5-e836f0741fbc-kube-api-access-576kx\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.197124 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f67bb64-83be-481c-a5d5-e836f0741fbc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.549036 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" event={"ID":"7f67bb64-83be-481c-a5d5-e836f0741fbc","Type":"ContainerDied","Data":"d0e9c032d45ee16a1a6b4da97a77b5ac46ff35230627dd91dc34bbdb0c3b3eec"} Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.549105 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e9c032d45ee16a1a6b4da97a77b5ac46ff35230627dd91dc34bbdb0c3b3eec" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.549192 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r7nrr" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.806077 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd"] Jan 22 10:59:19 crc kubenswrapper[4752]: E0122 10:59:19.806553 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f67bb64-83be-481c-a5d5-e836f0741fbc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.806576 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f67bb64-83be-481c-a5d5-e836f0741fbc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.806757 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f67bb64-83be-481c-a5d5-e836f0741fbc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.807453 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.810642 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.813352 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.814182 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.814195 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.814252 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.814479 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.814550 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.815475 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.825284 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd"] Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.909762 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvx4\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-kube-api-access-slvx4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910126 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910274 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910356 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910426 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910496 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910568 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910654 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910727 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910796 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.910891 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.911024 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.911119 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:19 crc kubenswrapper[4752]: I0122 10:59:19.911191 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013543 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013590 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013629 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013684 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: 
I0122 10:59:20.013807 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013841 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013888 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013936 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013969 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.013997 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.014050 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slvx4\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-kube-api-access-slvx4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.014077 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.014169 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.014220 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.018512 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.019304 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.023083 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.023389 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.024011 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.024095 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.024748 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.024755 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.024907 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.026903 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.029837 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.030532 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.032305 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slvx4\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-kube-api-access-slvx4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: 
\"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.036299 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.169107 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 10:59:20 crc kubenswrapper[4752]: W0122 10:59:20.719994 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2930d972_aa34_4715_aec4_9cc44811025e.slice/crio-ee7819287eb99eef90f2cf272ab0dc27949b65bc7292e6bc1a42f9729ad1266d WatchSource:0}: Error finding container ee7819287eb99eef90f2cf272ab0dc27949b65bc7292e6bc1a42f9729ad1266d: Status 404 returned error can't find the container with id ee7819287eb99eef90f2cf272ab0dc27949b65bc7292e6bc1a42f9729ad1266d Jan 22 10:59:20 crc kubenswrapper[4752]: I0122 10:59:20.721531 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd"] Jan 22 10:59:21 crc kubenswrapper[4752]: I0122 10:59:21.150711 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 10:59:21 crc kubenswrapper[4752]: I0122 10:59:21.569735 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" event={"ID":"2930d972-aa34-4715-aec4-9cc44811025e","Type":"ContainerStarted","Data":"5e5e2018533befc3ef993d5f38173d0c0163164697773fc70b4ea0360e41e676"} Jan 22 10:59:21 crc kubenswrapper[4752]: I0122 10:59:21.570064 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" event={"ID":"2930d972-aa34-4715-aec4-9cc44811025e","Type":"ContainerStarted","Data":"ee7819287eb99eef90f2cf272ab0dc27949b65bc7292e6bc1a42f9729ad1266d"} Jan 22 10:59:21 crc kubenswrapper[4752]: I0122 10:59:21.603276 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" podStartSLOduration=2.177545555 podStartE2EDuration="2.603257056s" podCreationTimestamp="2026-01-22 10:59:19 +0000 UTC" firstStartedPulling="2026-01-22 10:59:20.722443938 +0000 UTC m=+2039.952386846" lastFinishedPulling="2026-01-22 10:59:21.148155439 +0000 UTC m=+2040.378098347" observedRunningTime="2026-01-22 10:59:21.597800982 +0000 UTC m=+2040.827743900" watchObservedRunningTime="2026-01-22 10:59:21.603257056 +0000 UTC m=+2040.833199964" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.160155 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd"] Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.162377 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.164929 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.171291 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd"] Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.185005 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.328135 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhlq\" (UniqueName: \"kubernetes.io/projected/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-kube-api-access-gbhlq\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.328311 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-config-volume\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.328374 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-secret-volume\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.430506 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-config-volume\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.430611 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-secret-volume\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.430661 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhlq\" (UniqueName: \"kubernetes.io/projected/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-kube-api-access-gbhlq\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.431651 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-config-volume\") pod 
\"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.437919 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-secret-volume\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.448460 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhlq\" (UniqueName: \"kubernetes.io/projected/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-kube-api-access-gbhlq\") pod \"collect-profiles-29484660-6qwwd\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.513249 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:00 crc kubenswrapper[4752]: I0122 11:00:00.962378 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd"] Jan 22 11:00:01 crc kubenswrapper[4752]: I0122 11:00:01.968257 4752 generic.go:334] "Generic (PLEG): container finished" podID="4e61af5a-ef99-48d5-9d12-8e3ad639a94f" containerID="6e8daad72678adf1d19b47f243e1d700f1f66093fe4f2ce9a81aa25422f40e32" exitCode=0 Jan 22 11:00:01 crc kubenswrapper[4752]: I0122 11:00:01.968318 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" event={"ID":"4e61af5a-ef99-48d5-9d12-8e3ad639a94f","Type":"ContainerDied","Data":"6e8daad72678adf1d19b47f243e1d700f1f66093fe4f2ce9a81aa25422f40e32"} Jan 22 11:00:01 crc kubenswrapper[4752]: I0122 11:00:01.968746 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" event={"ID":"4e61af5a-ef99-48d5-9d12-8e3ad639a94f","Type":"ContainerStarted","Data":"6b2be0890c32bb73426bc8e92a824088096a078bdf94fb02146354cbcf4d5574"} Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.363784 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.503352 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-config-volume\") pod \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.503510 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-secret-volume\") pod \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.503545 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbhlq\" (UniqueName: \"kubernetes.io/projected/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-kube-api-access-gbhlq\") pod \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\" (UID: \"4e61af5a-ef99-48d5-9d12-8e3ad639a94f\") " Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.504206 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e61af5a-ef99-48d5-9d12-8e3ad639a94f" (UID: "4e61af5a-ef99-48d5-9d12-8e3ad639a94f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.510988 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e61af5a-ef99-48d5-9d12-8e3ad639a94f" (UID: "4e61af5a-ef99-48d5-9d12-8e3ad639a94f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.511014 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-kube-api-access-gbhlq" (OuterVolumeSpecName: "kube-api-access-gbhlq") pod "4e61af5a-ef99-48d5-9d12-8e3ad639a94f" (UID: "4e61af5a-ef99-48d5-9d12-8e3ad639a94f"). InnerVolumeSpecName "kube-api-access-gbhlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.605790 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.605837 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbhlq\" (UniqueName: \"kubernetes.io/projected/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-kube-api-access-gbhlq\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.605868 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e61af5a-ef99-48d5-9d12-8e3ad639a94f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.987694 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" event={"ID":"4e61af5a-ef99-48d5-9d12-8e3ad639a94f","Type":"ContainerDied","Data":"6b2be0890c32bb73426bc8e92a824088096a078bdf94fb02146354cbcf4d5574"} Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.987761 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b2be0890c32bb73426bc8e92a824088096a078bdf94fb02146354cbcf4d5574" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.987764 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd" Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.989529 4752 generic.go:334] "Generic (PLEG): container finished" podID="2930d972-aa34-4715-aec4-9cc44811025e" containerID="5e5e2018533befc3ef993d5f38173d0c0163164697773fc70b4ea0360e41e676" exitCode=0 Jan 22 11:00:03 crc kubenswrapper[4752]: I0122 11:00:03.989563 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" event={"ID":"2930d972-aa34-4715-aec4-9cc44811025e","Type":"ContainerDied","Data":"5e5e2018533befc3ef993d5f38173d0c0163164697773fc70b4ea0360e41e676"} Jan 22 11:00:04 crc kubenswrapper[4752]: I0122 11:00:04.442709 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg"] Jan 22 11:00:04 crc kubenswrapper[4752]: I0122 11:00:04.450713 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484615-wmzmg"] Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.112111 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed74970-560a-4f45-84e8-ebedcaf74392" path="/var/lib/kubelet/pods/2ed74970-560a-4f45-84e8-ebedcaf74392/volumes" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.466654 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.542972 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-nova-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543049 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slvx4\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-kube-api-access-slvx4\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543114 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-libvirt-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543147 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-neutron-metadata-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543183 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ovn-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543211 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543263 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543298 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-inventory\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543326 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: 
\"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543359 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-telemetry-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543383 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543411 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-repo-setup-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543438 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ssh-key-openstack-edpm-ipam\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.543468 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-bootstrap-combined-ca-bundle\") pod \"2930d972-aa34-4715-aec4-9cc44811025e\" (UID: \"2930d972-aa34-4715-aec4-9cc44811025e\") " Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.553067 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.553951 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.554107 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.555390 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.557257 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.557358 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-kube-api-access-slvx4" (OuterVolumeSpecName: "kube-api-access-slvx4") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "kube-api-access-slvx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.562974 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.562957 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.563017 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.564746 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.582052 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.586372 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.589188 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.590615 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-inventory" (OuterVolumeSpecName: "inventory") pod "2930d972-aa34-4715-aec4-9cc44811025e" (UID: "2930d972-aa34-4715-aec4-9cc44811025e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645866 4752 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645904 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slvx4\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-kube-api-access-slvx4\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645913 4752 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645923 4752 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645935 4752 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645946 4752 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645956 4752 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645966 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645977 4752 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645987 4752 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.645995 4752 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2930d972-aa34-4715-aec4-9cc44811025e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.646007 4752 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.646016 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:05 crc kubenswrapper[4752]: I0122 11:00:05.646024 4752 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2930d972-aa34-4715-aec4-9cc44811025e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.007101 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" event={"ID":"2930d972-aa34-4715-aec4-9cc44811025e","Type":"ContainerDied","Data":"ee7819287eb99eef90f2cf272ab0dc27949b65bc7292e6bc1a42f9729ad1266d"} Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.007140 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7819287eb99eef90f2cf272ab0dc27949b65bc7292e6bc1a42f9729ad1266d" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.007206 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5kgd" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.095114 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"] Jan 22 11:00:06 crc kubenswrapper[4752]: E0122 11:00:06.095529 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2930d972-aa34-4715-aec4-9cc44811025e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.095552 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="2930d972-aa34-4715-aec4-9cc44811025e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 11:00:06 crc kubenswrapper[4752]: E0122 11:00:06.095582 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e61af5a-ef99-48d5-9d12-8e3ad639a94f" containerName="collect-profiles" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.095589 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e61af5a-ef99-48d5-9d12-8e3ad639a94f" containerName="collect-profiles" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.095762 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="2930d972-aa34-4715-aec4-9cc44811025e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.095778 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e61af5a-ef99-48d5-9d12-8e3ad639a94f" containerName="collect-profiles" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.096459 4752 util.go:30] "No sandbox for pod can be found. 
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.096459 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.098968 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.099193 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.099215 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.099307 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.099441 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.135989 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"]
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.155336 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qlz\" (UniqueName: \"kubernetes.io/projected/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-kube-api-access-l5qlz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.155411 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.155449 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.155664 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.155981 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"
Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.257836 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.258014 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.258111 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qlz\" (UniqueName: \"kubernetes.io/projected/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-kube-api-access-l5qlz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.258138 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.258159 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.260239 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.263568 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.264273 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.264695 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.279414 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qlz\" (UniqueName: \"kubernetes.io/projected/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-kube-api-access-l5qlz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8cpbn\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:06 crc kubenswrapper[4752]: I0122 11:00:06.413675 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:00:07 crc kubenswrapper[4752]: I0122 11:00:07.079366 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn"] Jan 22 11:00:08 crc kubenswrapper[4752]: I0122 11:00:08.029458 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" event={"ID":"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841","Type":"ContainerStarted","Data":"644c0830cb25fb15b2b34e93759133a7e1d44c86807ccd54a0448f5fb0bc839d"} Jan 22 11:00:09 crc kubenswrapper[4752]: I0122 11:00:09.040777 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" event={"ID":"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841","Type":"ContainerStarted","Data":"e5e8239ed82d88149e42f06ff6763a67ea8e9516917156c03b2ce7d9da84b594"} Jan 22 11:00:09 crc kubenswrapper[4752]: I0122 11:00:09.061470 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" podStartSLOduration=2.097119781 podStartE2EDuration="3.061452252s" podCreationTimestamp="2026-01-22 11:00:06 +0000 UTC" firstStartedPulling="2026-01-22 11:00:07.079067724 +0000 UTC m=+2086.309010642" lastFinishedPulling="2026-01-22 11:00:08.043400195 +0000 UTC m=+2087.273343113" observedRunningTime="2026-01-22 11:00:09.058244738 +0000 UTC m=+2088.288187656" watchObservedRunningTime="2026-01-22 11:00:09.061452252 +0000 UTC m=+2088.291395170" Jan 22 11:00:46 crc kubenswrapper[4752]: I0122 11:00:46.429945 4752 scope.go:117] "RemoveContainer" containerID="143b79a97f47ed7bf1634ee1af19726cf8d6eb4d7a54e090d20294d534a7338b" Jan 22 11:00:57 crc kubenswrapper[4752]: I0122 11:00:57.724278 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:00:57 crc kubenswrapper[4752]: I0122 11:00:57.725226 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.155998 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29484661-m7d5x"] Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.157619 4752 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.169457 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484661-m7d5x"] Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.241153 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-config-data\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.241365 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fch5x\" (UniqueName: \"kubernetes.io/projected/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-kube-api-access-fch5x\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.241474 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-fernet-keys\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.241785 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-combined-ca-bundle\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.344741 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-combined-ca-bundle\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.345403 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-config-data\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.345530 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fch5x\" (UniqueName: \"kubernetes.io/projected/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-kube-api-access-fch5x\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.345579 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-fernet-keys\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.355582 4752 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-combined-ca-bundle\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.355875 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-config-data\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.368141 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-fernet-keys\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.370106 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fch5x\" (UniqueName: \"kubernetes.io/projected/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-kube-api-access-fch5x\") pod \"keystone-cron-29484661-m7d5x\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.483041 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:00 crc kubenswrapper[4752]: I0122 11:01:00.998209 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484661-m7d5x"] Jan 22 11:01:01 crc kubenswrapper[4752]: I0122 11:01:01.542160 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-m7d5x" event={"ID":"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c","Type":"ContainerStarted","Data":"f978ddbbcea4b4d23ddec36a09e734b9a77c3986ba626865e60fd47e4cea017b"} Jan 22 11:01:01 crc kubenswrapper[4752]: I0122 11:01:01.543833 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-m7d5x" event={"ID":"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c","Type":"ContainerStarted","Data":"c11cc71891a6ae7f3344f2c1812d59a0d3c6e1f7c3bf01f4426f3bf5172b334d"} Jan 22 11:01:01 crc kubenswrapper[4752]: I0122 11:01:01.562954 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29484661-m7d5x" podStartSLOduration=1.562928587 podStartE2EDuration="1.562928587s" podCreationTimestamp="2026-01-22 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:01:01.561676764 +0000 UTC m=+2140.791619672" watchObservedRunningTime="2026-01-22 11:01:01.562928587 +0000 UTC m=+2140.792871495" Jan 22 11:01:04 crc kubenswrapper[4752]: I0122 11:01:04.570011 4752 generic.go:334] "Generic (PLEG): container finished" podID="2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" containerID="f978ddbbcea4b4d23ddec36a09e734b9a77c3986ba626865e60fd47e4cea017b" exitCode=0 Jan 22 11:01:04 crc kubenswrapper[4752]: I0122 11:01:04.570111 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-m7d5x" event={"ID":"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c","Type":"ContainerDied","Data":"f978ddbbcea4b4d23ddec36a09e734b9a77c3986ba626865e60fd47e4cea017b"} Jan 22 11:01:05 crc 
kubenswrapper[4752]: I0122 11:01:05.971971 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.051378 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-config-data\") pod \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.051523 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fch5x\" (UniqueName: \"kubernetes.io/projected/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-kube-api-access-fch5x\") pod \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.051706 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-combined-ca-bundle\") pod \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.051744 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-fernet-keys\") pod \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\" (UID: \"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c\") " Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.057986 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" (UID: "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.059637 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-kube-api-access-fch5x" (OuterVolumeSpecName: "kube-api-access-fch5x") pod "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" (UID: "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c"). InnerVolumeSpecName "kube-api-access-fch5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.083225 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" (UID: "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.114315 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-config-data" (OuterVolumeSpecName: "config-data") pod "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" (UID: "2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.153723 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.153758 4752 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.153767 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.153776 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fch5x\" (UniqueName: \"kubernetes.io/projected/2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c-kube-api-access-fch5x\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.591358 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484661-m7d5x" event={"ID":"2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c","Type":"ContainerDied","Data":"c11cc71891a6ae7f3344f2c1812d59a0d3c6e1f7c3bf01f4426f3bf5172b334d"} Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.591399 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11cc71891a6ae7f3344f2c1812d59a0d3c6e1f7c3bf01f4426f3bf5172b334d" Jan 22 11:01:06 crc kubenswrapper[4752]: I0122 11:01:06.591457 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484661-m7d5x" Jan 22 11:01:21 crc kubenswrapper[4752]: I0122 11:01:21.752618 4752 generic.go:334] "Generic (PLEG): container finished" podID="fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" containerID="e5e8239ed82d88149e42f06ff6763a67ea8e9516917156c03b2ce7d9da84b594" exitCode=0 Jan 22 11:01:21 crc kubenswrapper[4752]: I0122 11:01:21.752721 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" event={"ID":"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841","Type":"ContainerDied","Data":"e5e8239ed82d88149e42f06ff6763a67ea8e9516917156c03b2ce7d9da84b594"} Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.238203 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.307890 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5qlz\" (UniqueName: \"kubernetes.io/projected/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-kube-api-access-l5qlz\") pod \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.308019 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovncontroller-config-0\") pod \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.308065 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovn-combined-ca-bundle\") pod \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.308080 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory\") pod \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.308214 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ssh-key-openstack-edpm-ipam\") pod \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") " Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.313060 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" (UID: "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.313152 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-kube-api-access-l5qlz" (OuterVolumeSpecName: "kube-api-access-l5qlz") pod "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" (UID: "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841"). InnerVolumeSpecName "kube-api-access-l5qlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:01:23 crc kubenswrapper[4752]: E0122 11:01:23.333986 4752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory podName:fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841 nodeName:}" failed. No retries permitted until 2026-01-22 11:01:23.833961286 +0000 UTC m=+2163.063904194 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory") pod "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" (UID: "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841") : error deleting /var/lib/kubelet/pods/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841/volume-subpaths: remove /var/lib/kubelet/pods/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841/volume-subpaths: no such file or directory Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.334383 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" (UID: "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.334593 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" (UID: "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.410829 4752 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.410879 4752 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.410892 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.410904 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5qlz\" (UniqueName: \"kubernetes.io/projected/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-kube-api-access-l5qlz\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.899832 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg"] Jan 22 11:01:23 crc kubenswrapper[4752]: E0122 11:01:23.900218 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" containerName="keystone-cron" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.900236 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" containerName="keystone-cron" Jan 22 11:01:23 crc kubenswrapper[4752]: E0122 11:01:23.900251 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.900257 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 22 
11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.900428 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.900447 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eacb218-b3b2-4eeb-b8b3-a27cbcbd202c" containerName="keystone-cron"
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.901058 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg"
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.908290 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.908574 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.908603 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory\") pod \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\" (UID: \"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841\") "
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.911112 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg"]
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.921441 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" event={"ID":"fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841","Type":"ContainerDied","Data":"644c0830cb25fb15b2b34e93759133a7e1d44c86807ccd54a0448f5fb0bc839d"}
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.921488 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644c0830cb25fb15b2b34e93759133a7e1d44c86807ccd54a0448f5fb0bc839d"
Jan 22 11:01:23 crc kubenswrapper[4752]: I0122 11:01:23.921552 4752 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8cpbn" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.011140 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.011319 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.011412 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.011478 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.011509 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5f5\" (UniqueName: \"kubernetes.io/projected/4c526cae-9401-4231-bad5-587cea70eb90-kube-api-access-cb5f5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.011574 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.115558 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc 
kubenswrapper[4752]: I0122 11:01:24.115659 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5f5\" (UniqueName: \"kubernetes.io/projected/4c526cae-9401-4231-bad5-587cea70eb90-kube-api-access-cb5f5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.115747 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.115845 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.116058 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.116140 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.120147 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.122614 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.129231 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.132244 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.136020 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.141990 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5f5\" (UniqueName: \"kubernetes.io/projected/4c526cae-9401-4231-bad5-587cea70eb90-kube-api-access-cb5f5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.269216 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.288760 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory" (OuterVolumeSpecName: "inventory") pod "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841" (UID: "fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.337596 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe4d7c69-a6c4-4b6d-b4ce-ccca41a2b841-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.842198 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg"] Jan 22 11:01:24 crc kubenswrapper[4752]: I0122 11:01:24.932140 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" event={"ID":"4c526cae-9401-4231-bad5-587cea70eb90","Type":"ContainerStarted","Data":"3c0b86205bdf791b7c7786b54c0ec71c0af62525d0f653807696a6c3bcf9f517"} Jan 22 11:01:25 crc kubenswrapper[4752]: I0122 11:01:25.943887 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" event={"ID":"4c526cae-9401-4231-bad5-587cea70eb90","Type":"ContainerStarted","Data":"8f6be5230713c8f4b86848f3f4edddf2ec54a8ff3563f97610ccfdfc92670c56"} Jan 22 11:01:25 crc kubenswrapper[4752]: I0122 11:01:25.963900 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" podStartSLOduration=2.493349983 podStartE2EDuration="2.963841835s" podCreationTimestamp="2026-01-22 11:01:23 +0000 UTC" firstStartedPulling="2026-01-22 11:01:24.844509326 +0000 UTC m=+2164.074452244" lastFinishedPulling="2026-01-22 11:01:25.315001178 +0000 UTC m=+2164.544944096" observedRunningTime="2026-01-22 11:01:25.959819238 +0000 UTC m=+2165.189762146" watchObservedRunningTime="2026-01-22 11:01:25.963841835 +0000 UTC m=+2165.193784753" Jan 22 11:01:27 crc kubenswrapper[4752]: I0122 11:01:27.723945 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:01:27 crc kubenswrapper[4752]: I0122 11:01:27.724308 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:01:57 crc kubenswrapper[4752]: I0122 11:01:57.724085 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:01:57 crc kubenswrapper[4752]: I0122 11:01:57.724675 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:01:57 crc kubenswrapper[4752]: I0122 11:01:57.724734 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 
Jan 22 11:01:57 crc kubenswrapper[4752]: I0122 11:01:57.725830 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70cbafe9440260ad6827dda2f3848d07643b15d146fa166af3af676a25a18d68"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:01:57 crc kubenswrapper[4752]: I0122 11:01:57.725972 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://70cbafe9440260ad6827dda2f3848d07643b15d146fa166af3af676a25a18d68" gracePeriod=600
Jan 22 11:01:58 crc kubenswrapper[4752]: I0122 11:01:58.259033 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="70cbafe9440260ad6827dda2f3848d07643b15d146fa166af3af676a25a18d68" exitCode=0
Jan 22 11:01:58 crc kubenswrapper[4752]: I0122 11:01:58.259456 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"70cbafe9440260ad6827dda2f3848d07643b15d146fa166af3af676a25a18d68"}
Jan 22 11:01:58 crc kubenswrapper[4752]: I0122 11:01:58.259489 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61"}
Jan 22 11:01:58 crc kubenswrapper[4752]: I0122 11:01:58.259511 4752 scope.go:117] "RemoveContainer" containerID="95f49b8d3b2a29d6e49e059f56ddb1725ec652d91fd4d10fdf2aa8c455daa877"
Jan 22 11:02:20 crc kubenswrapper[4752]: I0122 11:02:20.487242 4752 generic.go:334] "Generic (PLEG): container finished" podID="4c526cae-9401-4231-bad5-587cea70eb90" containerID="8f6be5230713c8f4b86848f3f4edddf2ec54a8ff3563f97610ccfdfc92670c56" exitCode=0
Jan 22 11:02:20 crc kubenswrapper[4752]: I0122 11:02:20.487387 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" event={"ID":"4c526cae-9401-4231-bad5-587cea70eb90","Type":"ContainerDied","Data":"8f6be5230713c8f4b86848f3f4edddf2ec54a8ff3563f97610ccfdfc92670c56"}
Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.928098 4752 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.936904 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4c526cae-9401-4231-bad5-587cea70eb90\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.937035 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-inventory\") pod \"4c526cae-9401-4231-bad5-587cea70eb90\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.937064 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-ssh-key-openstack-edpm-ipam\") pod \"4c526cae-9401-4231-bad5-587cea70eb90\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.937111 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-metadata-combined-ca-bundle\") pod \"4c526cae-9401-4231-bad5-587cea70eb90\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.937174 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb5f5\" (UniqueName: \"kubernetes.io/projected/4c526cae-9401-4231-bad5-587cea70eb90-kube-api-access-cb5f5\") pod \"4c526cae-9401-4231-bad5-587cea70eb90\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.937254 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-nova-metadata-neutron-config-0\") pod \"4c526cae-9401-4231-bad5-587cea70eb90\" (UID: \"4c526cae-9401-4231-bad5-587cea70eb90\") " Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.943239 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c526cae-9401-4231-bad5-587cea70eb90-kube-api-access-cb5f5" (OuterVolumeSpecName: "kube-api-access-cb5f5") pod "4c526cae-9401-4231-bad5-587cea70eb90" (UID: "4c526cae-9401-4231-bad5-587cea70eb90"). InnerVolumeSpecName "kube-api-access-cb5f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.954627 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4c526cae-9401-4231-bad5-587cea70eb90" (UID: "4c526cae-9401-4231-bad5-587cea70eb90"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.976963 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4c526cae-9401-4231-bad5-587cea70eb90" (UID: "4c526cae-9401-4231-bad5-587cea70eb90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.982910 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-inventory" (OuterVolumeSpecName: "inventory") pod "4c526cae-9401-4231-bad5-587cea70eb90" (UID: "4c526cae-9401-4231-bad5-587cea70eb90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.984197 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4c526cae-9401-4231-bad5-587cea70eb90" (UID: "4c526cae-9401-4231-bad5-587cea70eb90"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:02:21 crc kubenswrapper[4752]: I0122 11:02:21.996446 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4c526cae-9401-4231-bad5-587cea70eb90" (UID: "4c526cae-9401-4231-bad5-587cea70eb90"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.039114 4752 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.039154 4752 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.039171 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.039185 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.039198 4752 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c526cae-9401-4231-bad5-587cea70eb90-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.039209 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb5f5\" (UniqueName: \"kubernetes.io/projected/4c526cae-9401-4231-bad5-587cea70eb90-kube-api-access-cb5f5\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.509175 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" event={"ID":"4c526cae-9401-4231-bad5-587cea70eb90","Type":"ContainerDied","Data":"3c0b86205bdf791b7c7786b54c0ec71c0af62525d0f653807696a6c3bcf9f517"} Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.509488 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c0b86205bdf791b7c7786b54c0ec71c0af62525d0f653807696a6c3bcf9f517" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.509222 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hwcrg" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.611937 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq"] Jan 22 11:02:22 crc kubenswrapper[4752]: E0122 11:02:22.612432 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c526cae-9401-4231-bad5-587cea70eb90" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.612456 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c526cae-9401-4231-bad5-587cea70eb90" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.612660 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c526cae-9401-4231-bad5-587cea70eb90" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.613376 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.615551 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.616158 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.616202 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.616798 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.617076 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.631414 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq"] Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.658600 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.658752 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7ks5\" (UniqueName: \"kubernetes.io/projected/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-kube-api-access-d7ks5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.658846 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: 
\"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.658937 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.659019 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.761507 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.761614 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.761671 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.761765 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ks5\" (UniqueName: \"kubernetes.io/projected/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-kube-api-access-d7ks5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.761842 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.766818 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: 
\"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.766818 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.767312 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.783245 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.783687 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7ks5\" (UniqueName: \"kubernetes.io/projected/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-kube-api-access-d7ks5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:22 crc kubenswrapper[4752]: I0122 11:02:22.928114 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:02:23 crc kubenswrapper[4752]: I0122 11:02:23.517577 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:02:23 crc kubenswrapper[4752]: I0122 11:02:23.520140 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq"] Jan 22 11:02:24 crc kubenswrapper[4752]: I0122 11:02:24.539136 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" event={"ID":"68346080-e43b-4ba0-8ddb-b15da9e7e5bb","Type":"ContainerStarted","Data":"d3f230f901013c87614144613e3510b2ee8fa6c358a8b96ba5916d46f866063d"} Jan 22 11:02:25 crc kubenswrapper[4752]: I0122 11:02:25.552464 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" event={"ID":"68346080-e43b-4ba0-8ddb-b15da9e7e5bb","Type":"ContainerStarted","Data":"4eae733ca6e3a44f3be5712ece616e3f9d0deaba39a01b222f37cbf9236c19b7"} Jan 22 11:02:25 crc kubenswrapper[4752]: I0122 11:02:25.576574 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" podStartSLOduration=3.155500879 podStartE2EDuration="3.576548487s" podCreationTimestamp="2026-01-22 11:02:22 +0000 UTC" firstStartedPulling="2026-01-22 11:02:23.517373611 +0000 UTC m=+2222.747316519" lastFinishedPulling="2026-01-22 11:02:23.938421229 +0000 UTC m=+2223.168364127" observedRunningTime="2026-01-22 11:02:25.572369727 +0000 UTC m=+2224.802312705" watchObservedRunningTime="2026-01-22 11:02:25.576548487 +0000 UTC m=+2224.806491395" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.355573 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4pg6"] Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.358882 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.367645 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4pg6"] Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.374974 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpbm\" (UniqueName: \"kubernetes.io/projected/639c9594-c160-4504-80a2-3c3475f0d2fa-kube-api-access-8xpbm\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.375018 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-catalog-content\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.375075 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-utilities\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.476712 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpbm\" (UniqueName: \"kubernetes.io/projected/639c9594-c160-4504-80a2-3c3475f0d2fa-kube-api-access-8xpbm\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.476770 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-catalog-content\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.476835 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-utilities\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.477362 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-catalog-content\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.477410 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-utilities\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.498509 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8xpbm\" (UniqueName: \"kubernetes.io/projected/639c9594-c160-4504-80a2-3c3475f0d2fa-kube-api-access-8xpbm\") pod \"community-operators-x4pg6\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:43 crc kubenswrapper[4752]: I0122 11:02:43.693271 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:44 crc kubenswrapper[4752]: I0122 11:02:44.195926 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4pg6"] Jan 22 11:02:44 crc kubenswrapper[4752]: I0122 11:02:44.751222 4752 generic.go:334] "Generic (PLEG): container finished" podID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerID="e261137abe8ab7d161df18feacb80d53a43608a245a935299e93087a640b32d2" exitCode=0 Jan 22 11:02:44 crc kubenswrapper[4752]: I0122 11:02:44.751315 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerDied","Data":"e261137abe8ab7d161df18feacb80d53a43608a245a935299e93087a640b32d2"} Jan 22 11:02:44 crc kubenswrapper[4752]: I0122 11:02:44.751546 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerStarted","Data":"5ac11886637b76389de2877c69122fa16dbfe6d22c82643faf9e3443e129888e"} Jan 22 11:02:45 crc kubenswrapper[4752]: I0122 11:02:45.763693 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerStarted","Data":"fc00d7df9ef65355094c523b0ed918a6abe7a09145469c0deab2b88befa5585f"} Jan 22 11:02:46 crc kubenswrapper[4752]: I0122 11:02:46.779421 4752 generic.go:334] "Generic (PLEG): container finished" podID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerID="fc00d7df9ef65355094c523b0ed918a6abe7a09145469c0deab2b88befa5585f" exitCode=0 Jan 22 11:02:46 crc kubenswrapper[4752]: I0122 11:02:46.779495 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerDied","Data":"fc00d7df9ef65355094c523b0ed918a6abe7a09145469c0deab2b88befa5585f"} Jan 22 11:02:47 crc kubenswrapper[4752]: I0122 11:02:47.792403 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerStarted","Data":"cf6035df02be0ca12e1c8d87ff8771094beef582b6c6fbe4aa98ac407238d2fb"} Jan 22 11:02:47 crc kubenswrapper[4752]: I0122 11:02:47.813275 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4pg6" podStartSLOduration=2.292108398 podStartE2EDuration="4.813256989s" podCreationTimestamp="2026-01-22 11:02:43 +0000 UTC" firstStartedPulling="2026-01-22 11:02:44.754036723 +0000 UTC m=+2243.983979641" lastFinishedPulling="2026-01-22 11:02:47.275185314 +0000 UTC m=+2246.505128232" observedRunningTime="2026-01-22 11:02:47.810387074 +0000 UTC m=+2247.040329982" watchObservedRunningTime="2026-01-22 11:02:47.813256989 +0000 UTC m=+2247.043199897" Jan 22 11:02:53 crc kubenswrapper[4752]: I0122 11:02:53.694009 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:53 crc kubenswrapper[4752]: I0122 11:02:53.694537 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:53 crc kubenswrapper[4752]: I0122 11:02:53.741182 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:53 crc kubenswrapper[4752]: I0122 11:02:53.917570 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:53 crc kubenswrapper[4752]: I0122 11:02:53.977594 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4pg6"] Jan 22 11:02:55 crc kubenswrapper[4752]: I0122 11:02:55.880587 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4pg6" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="registry-server" containerID="cri-o://cf6035df02be0ca12e1c8d87ff8771094beef582b6c6fbe4aa98ac407238d2fb" gracePeriod=2 Jan 22 11:02:56 crc kubenswrapper[4752]: I0122 11:02:56.909971 4752 generic.go:334] "Generic (PLEG): container finished" podID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerID="cf6035df02be0ca12e1c8d87ff8771094beef582b6c6fbe4aa98ac407238d2fb" exitCode=0 Jan 22 11:02:56 crc kubenswrapper[4752]: I0122 11:02:56.910036 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerDied","Data":"cf6035df02be0ca12e1c8d87ff8771094beef582b6c6fbe4aa98ac407238d2fb"} Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.157919 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.264284 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-catalog-content\") pod \"639c9594-c160-4504-80a2-3c3475f0d2fa\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.265314 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-utilities\") pod \"639c9594-c160-4504-80a2-3c3475f0d2fa\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.265588 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xpbm\" (UniqueName: \"kubernetes.io/projected/639c9594-c160-4504-80a2-3c3475f0d2fa-kube-api-access-8xpbm\") pod \"639c9594-c160-4504-80a2-3c3475f0d2fa\" (UID: \"639c9594-c160-4504-80a2-3c3475f0d2fa\") " Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.266241 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-utilities" (OuterVolumeSpecName: "utilities") pod "639c9594-c160-4504-80a2-3c3475f0d2fa" (UID: "639c9594-c160-4504-80a2-3c3475f0d2fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.268110 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.276335 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639c9594-c160-4504-80a2-3c3475f0d2fa-kube-api-access-8xpbm" (OuterVolumeSpecName: "kube-api-access-8xpbm") pod "639c9594-c160-4504-80a2-3c3475f0d2fa" (UID: "639c9594-c160-4504-80a2-3c3475f0d2fa"). InnerVolumeSpecName "kube-api-access-8xpbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.326465 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "639c9594-c160-4504-80a2-3c3475f0d2fa" (UID: "639c9594-c160-4504-80a2-3c3475f0d2fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.371212 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639c9594-c160-4504-80a2-3c3475f0d2fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.371258 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xpbm\" (UniqueName: \"kubernetes.io/projected/639c9594-c160-4504-80a2-3c3475f0d2fa-kube-api-access-8xpbm\") on node \"crc\" DevicePath \"\"" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.920606 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4pg6" event={"ID":"639c9594-c160-4504-80a2-3c3475f0d2fa","Type":"ContainerDied","Data":"5ac11886637b76389de2877c69122fa16dbfe6d22c82643faf9e3443e129888e"} Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.920679 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4pg6" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.920694 4752 scope.go:117] "RemoveContainer" containerID="cf6035df02be0ca12e1c8d87ff8771094beef582b6c6fbe4aa98ac407238d2fb" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.946729 4752 scope.go:117] "RemoveContainer" containerID="fc00d7df9ef65355094c523b0ed918a6abe7a09145469c0deab2b88befa5585f" Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.972643 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4pg6"] Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.991154 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4pg6"] Jan 22 11:02:57 crc kubenswrapper[4752]: I0122 11:02:57.999405 4752 scope.go:117] "RemoveContainer" containerID="e261137abe8ab7d161df18feacb80d53a43608a245a935299e93087a640b32d2" Jan 22 11:02:59 crc kubenswrapper[4752]: I0122 11:02:59.116512 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" path="/var/lib/kubelet/pods/639c9594-c160-4504-80a2-3c3475f0d2fa/volumes" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.552982 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kgwtf"] Jan 22 11:03:02 crc kubenswrapper[4752]: E0122 11:03:02.555689 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="extract-content" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.555806 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="extract-content" Jan 22 11:03:02 crc kubenswrapper[4752]: E0122 11:03:02.555934 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="registry-server" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.556021 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="registry-server" Jan 22 11:03:02 crc kubenswrapper[4752]: E0122 11:03:02.556148 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="extract-utilities" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.556237 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="extract-utilities" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.556560 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="639c9594-c160-4504-80a2-3c3475f0d2fa" containerName="registry-server" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.558568 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.581657 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgwtf"] Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.593331 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-utilities\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.593430 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gkd\" (UniqueName: \"kubernetes.io/projected/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-kube-api-access-48gkd\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.593956 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-catalog-content\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.696417 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gkd\" (UniqueName: \"kubernetes.io/projected/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-kube-api-access-48gkd\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.696613 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-catalog-content\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.696717 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-utilities\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.697472 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-utilities\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.697528 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-catalog-content\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.720324 4752 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-48gkd\" (UniqueName: \"kubernetes.io/projected/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-kube-api-access-48gkd\") pod \"redhat-marketplace-kgwtf\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:02 crc kubenswrapper[4752]: I0122 11:03:02.896026 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:03 crc kubenswrapper[4752]: I0122 11:03:03.493387 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgwtf"] Jan 22 11:03:03 crc kubenswrapper[4752]: I0122 11:03:03.987805 4752 generic.go:334] "Generic (PLEG): container finished" podID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerID="c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53" exitCode=0 Jan 22 11:03:03 crc kubenswrapper[4752]: I0122 11:03:03.987930 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgwtf" event={"ID":"d26af3e3-383d-4881-8d3d-a5e96dc4cb17","Type":"ContainerDied","Data":"c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53"} Jan 22 11:03:03 crc kubenswrapper[4752]: I0122 11:03:03.988124 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgwtf" event={"ID":"d26af3e3-383d-4881-8d3d-a5e96dc4cb17","Type":"ContainerStarted","Data":"ed3527ce7e946bbb3690caf07a2c5ab9ce62bc1aeb9dce3d506a6f0792a45ae0"} Jan 22 11:03:06 crc kubenswrapper[4752]: I0122 11:03:06.009913 4752 generic.go:334] "Generic (PLEG): container finished" podID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerID="ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe" exitCode=0 Jan 22 11:03:06 crc kubenswrapper[4752]: I0122 11:03:06.009953 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgwtf" event={"ID":"d26af3e3-383d-4881-8d3d-a5e96dc4cb17","Type":"ContainerDied","Data":"ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe"} Jan 22 11:03:07 crc kubenswrapper[4752]: I0122 11:03:07.023289 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgwtf" event={"ID":"d26af3e3-383d-4881-8d3d-a5e96dc4cb17","Type":"ContainerStarted","Data":"00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf"} Jan 22 11:03:07 crc kubenswrapper[4752]: I0122 11:03:07.050289 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kgwtf" podStartSLOduration=2.380339438 podStartE2EDuration="5.050263284s" podCreationTimestamp="2026-01-22 11:03:02 +0000 UTC" firstStartedPulling="2026-01-22 11:03:03.993377809 +0000 UTC m=+2263.223320757" lastFinishedPulling="2026-01-22 11:03:06.663301695 +0000 UTC m=+2265.893244603" observedRunningTime="2026-01-22 11:03:07.045226261 +0000 UTC m=+2266.275169169" watchObservedRunningTime="2026-01-22 11:03:07.050263284 +0000 UTC m=+2266.280206192" Jan 22 11:03:08 crc kubenswrapper[4752]: I0122 11:03:08.925753 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x984t"] Jan 22 11:03:08 crc kubenswrapper[4752]: I0122 11:03:08.928994 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:08 crc kubenswrapper[4752]: I0122 11:03:08.941122 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x984t"] Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.034308 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-utilities\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.034376 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-catalog-content\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.034403 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2qd9\" (UniqueName: \"kubernetes.io/projected/1e48099f-7475-4195-9700-be5517e73002-kube-api-access-r2qd9\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.136882 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-catalog-content\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.136934 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2qd9\" (UniqueName: \"kubernetes.io/projected/1e48099f-7475-4195-9700-be5517e73002-kube-api-access-r2qd9\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.137166 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-utilities\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.137734 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-utilities\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.137828 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-catalog-content\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.162186 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r2qd9\" (UniqueName: \"kubernetes.io/projected/1e48099f-7475-4195-9700-be5517e73002-kube-api-access-r2qd9\") pod \"certified-operators-x984t\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.251681 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:09 crc kubenswrapper[4752]: I0122 11:03:09.812745 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x984t"] Jan 22 11:03:09 crc kubenswrapper[4752]: W0122 11:03:09.813492 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e48099f_7475_4195_9700_be5517e73002.slice/crio-dab241b8b77d23dbed9ba90d2f34dcdd2c5721c9b95ecf80c81d03f5bdbd3d64 WatchSource:0}: Error finding container dab241b8b77d23dbed9ba90d2f34dcdd2c5721c9b95ecf80c81d03f5bdbd3d64: Status 404 returned error can't find the container with id dab241b8b77d23dbed9ba90d2f34dcdd2c5721c9b95ecf80c81d03f5bdbd3d64 Jan 22 11:03:10 crc kubenswrapper[4752]: I0122 11:03:10.052759 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerStarted","Data":"dab241b8b77d23dbed9ba90d2f34dcdd2c5721c9b95ecf80c81d03f5bdbd3d64"} Jan 22 11:03:11 crc kubenswrapper[4752]: I0122 11:03:11.063901 4752 generic.go:334] "Generic (PLEG): container finished" podID="1e48099f-7475-4195-9700-be5517e73002" containerID="b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc" exitCode=0 Jan 22 11:03:11 crc kubenswrapper[4752]: I0122 11:03:11.063975 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerDied","Data":"b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc"} Jan 22 11:03:12 crc kubenswrapper[4752]: I0122 11:03:12.896694 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:12 crc kubenswrapper[4752]: I0122 11:03:12.898235 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:12 crc kubenswrapper[4752]: I0122 11:03:12.950014 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:13 crc kubenswrapper[4752]: I0122 11:03:13.084458 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerStarted","Data":"1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3"} Jan 22 11:03:13 crc kubenswrapper[4752]: I0122 11:03:13.134699 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:14 crc kubenswrapper[4752]: I0122 11:03:14.095599 4752 generic.go:334] "Generic (PLEG): container finished" podID="1e48099f-7475-4195-9700-be5517e73002" containerID="1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3" exitCode=0 Jan 22 11:03:14 crc kubenswrapper[4752]: I0122 11:03:14.095716 4752 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerDied","Data":"1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3"} Jan 22 11:03:14 crc kubenswrapper[4752]: I0122 11:03:14.920027 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgwtf"] Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.118361 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kgwtf" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="registry-server" containerID="cri-o://00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf" gracePeriod=2 Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.119030 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerStarted","Data":"448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0"} Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.161675 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x984t" podStartSLOduration=3.634011478 podStartE2EDuration="7.161660142s" podCreationTimestamp="2026-01-22 11:03:08 +0000 UTC" firstStartedPulling="2026-01-22 11:03:11.068173961 +0000 UTC m=+2270.298116869" lastFinishedPulling="2026-01-22 11:03:14.595822615 +0000 UTC m=+2273.825765533" observedRunningTime="2026-01-22 11:03:15.155327955 +0000 UTC m=+2274.385270863" watchObservedRunningTime="2026-01-22 11:03:15.161660142 +0000 UTC m=+2274.391603040" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.708966 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.872695 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-utilities\") pod \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.872758 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48gkd\" (UniqueName: \"kubernetes.io/projected/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-kube-api-access-48gkd\") pod \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.872897 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-catalog-content\") pod \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\" (UID: \"d26af3e3-383d-4881-8d3d-a5e96dc4cb17\") " Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.873895 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-utilities" (OuterVolumeSpecName: "utilities") pod "d26af3e3-383d-4881-8d3d-a5e96dc4cb17" (UID: "d26af3e3-383d-4881-8d3d-a5e96dc4cb17"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.878135 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-kube-api-access-48gkd" (OuterVolumeSpecName: "kube-api-access-48gkd") pod "d26af3e3-383d-4881-8d3d-a5e96dc4cb17" (UID: "d26af3e3-383d-4881-8d3d-a5e96dc4cb17"). InnerVolumeSpecName "kube-api-access-48gkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.894162 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d26af3e3-383d-4881-8d3d-a5e96dc4cb17" (UID: "d26af3e3-383d-4881-8d3d-a5e96dc4cb17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.975466 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.975510 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48gkd\" (UniqueName: \"kubernetes.io/projected/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-kube-api-access-48gkd\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:15 crc kubenswrapper[4752]: I0122 11:03:15.975524 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26af3e3-383d-4881-8d3d-a5e96dc4cb17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.130824 4752 generic.go:334] "Generic (PLEG): container finished" podID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerID="00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf" exitCode=0 Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.130912 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgwtf" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.130983 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgwtf" event={"ID":"d26af3e3-383d-4881-8d3d-a5e96dc4cb17","Type":"ContainerDied","Data":"00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf"} Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.131051 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgwtf" event={"ID":"d26af3e3-383d-4881-8d3d-a5e96dc4cb17","Type":"ContainerDied","Data":"ed3527ce7e946bbb3690caf07a2c5ab9ce62bc1aeb9dce3d506a6f0792a45ae0"} Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.131080 4752 scope.go:117] "RemoveContainer" containerID="00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.162577 4752 scope.go:117] "RemoveContainer" containerID="ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.171639 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgwtf"] Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.182806 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgwtf"] Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.192017 4752 scope.go:117] "RemoveContainer" containerID="c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.238255 4752 scope.go:117] "RemoveContainer" containerID="00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf" Jan 22 11:03:16 crc kubenswrapper[4752]: E0122 11:03:16.242962 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf\": container with ID starting with 00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf not found: ID does not exist" containerID="00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.242996 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf"} err="failed to get container status \"00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf\": rpc error: code = NotFound desc = could not find container \"00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf\": container with ID starting with 00915603cc412b53fe34c5c1cbe059c00a9988aa9e6a6867cd15a14bf078e2bf not found: ID does not exist" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.243019 4752 scope.go:117] "RemoveContainer" containerID="ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe" Jan 22 11:03:16 crc kubenswrapper[4752]: E0122 11:03:16.243751 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe\": container with ID starting with ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe not found: ID does not exist" containerID="ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.243793 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe"} err="failed to get container status \"ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe\": rpc error: code = NotFound desc = could not find container \"ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe\": container with ID starting with ede39a2f2a4a8594ffe5cc0046bf0c924d462a6a14565b798b6beadc08c530fe not found: ID does not exist" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.243820 4752 scope.go:117] "RemoveContainer" containerID="c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53" Jan 22 11:03:16 crc kubenswrapper[4752]: E0122 11:03:16.244224 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53\": container with ID starting with c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53 not found: ID does not exist" containerID="c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53" Jan 22 11:03:16 crc kubenswrapper[4752]: I0122 11:03:16.244291 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53"} err="failed to get container status \"c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53\": rpc error: code = NotFound desc = could not find container \"c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53\": container with ID starting with c0a6338d242176189ad96e1a9d8edb0281f424fd315c5b6270dba8fcafa00b53 not found: ID does not exist" Jan 22 11:03:17 crc kubenswrapper[4752]: I0122 11:03:17.113577 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" path="/var/lib/kubelet/pods/d26af3e3-383d-4881-8d3d-a5e96dc4cb17/volumes" Jan 22 11:03:19 crc kubenswrapper[4752]: I0122 11:03:19.252546 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:19 crc kubenswrapper[4752]: I0122 11:03:19.252948 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:19 crc kubenswrapper[4752]: I0122 11:03:19.320469 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:20 crc kubenswrapper[4752]: I0122 11:03:20.331005 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:23 crc kubenswrapper[4752]: I0122 11:03:23.512647 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x984t"] Jan 22 11:03:23 crc kubenswrapper[4752]: I0122 11:03:23.513222 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x984t" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="registry-server" containerID="cri-o://448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0" gracePeriod=2 Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.174792 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.261677 4752 generic.go:334] "Generic (PLEG): container finished" podID="1e48099f-7475-4195-9700-be5517e73002" containerID="448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0" exitCode=0 Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.261722 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerDied","Data":"448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0"} Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.261756 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x984t" event={"ID":"1e48099f-7475-4195-9700-be5517e73002","Type":"ContainerDied","Data":"dab241b8b77d23dbed9ba90d2f34dcdd2c5721c9b95ecf80c81d03f5bdbd3d64"} Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.261776 4752 scope.go:117] "RemoveContainer" containerID="448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.261835 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x984t" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.292901 4752 scope.go:117] "RemoveContainer" containerID="1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.322359 4752 scope.go:117] "RemoveContainer" containerID="b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.361006 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-catalog-content\") pod \"1e48099f-7475-4195-9700-be5517e73002\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.361336 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2qd9\" (UniqueName: \"kubernetes.io/projected/1e48099f-7475-4195-9700-be5517e73002-kube-api-access-r2qd9\") pod \"1e48099f-7475-4195-9700-be5517e73002\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.361471 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-utilities\") pod \"1e48099f-7475-4195-9700-be5517e73002\" (UID: \"1e48099f-7475-4195-9700-be5517e73002\") " Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.362111 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-utilities" (OuterVolumeSpecName: "utilities") pod "1e48099f-7475-4195-9700-be5517e73002" (UID: "1e48099f-7475-4195-9700-be5517e73002"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.373630 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e48099f-7475-4195-9700-be5517e73002-kube-api-access-r2qd9" (OuterVolumeSpecName: "kube-api-access-r2qd9") pod "1e48099f-7475-4195-9700-be5517e73002" (UID: "1e48099f-7475-4195-9700-be5517e73002"). InnerVolumeSpecName "kube-api-access-r2qd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.381703 4752 scope.go:117] "RemoveContainer" containerID="448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0" Jan 22 11:03:24 crc kubenswrapper[4752]: E0122 11:03:24.382264 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0\": container with ID starting with 448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0 not found: ID does not exist" containerID="448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.382295 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0"} err="failed to get container status \"448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0\": rpc error: code = NotFound desc = could not find container \"448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0\": container with ID starting with 448298ca9611ddd9f9cc54900a2bfee3f25ecad0e52091ab26796876b06933f0 not found: ID does not exist" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.382315 4752 scope.go:117] "RemoveContainer" containerID="1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3" Jan 22 11:03:24 crc kubenswrapper[4752]: E0122 11:03:24.382834 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3\": container with ID starting with 1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3 not found: ID does not exist" containerID="1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.382892 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3"} err="failed to get container status \"1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3\": rpc error: code = NotFound desc = could not find container \"1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3\": container with ID starting with 1e43024508fb7b67135f6533dd796416676bd255a8751e3416a98e5e6caba8b3 not found: ID does not exist" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.382919 4752 scope.go:117] "RemoveContainer" containerID="b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc" Jan 22 11:03:24 crc kubenswrapper[4752]: E0122 11:03:24.383202 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc\": container with ID starting with b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc not found: ID does not 
exist" containerID="b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.383222 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc"} err="failed to get container status \"b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc\": rpc error: code = NotFound desc = could not find container \"b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc\": container with ID starting with b55ba19865273258a9d5775ce1d118ef137c1c7ece40bd573bd0ea5455cab0bc not found: ID does not exist" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.413325 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e48099f-7475-4195-9700-be5517e73002" (UID: "1e48099f-7475-4195-9700-be5517e73002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.464817 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.464878 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2qd9\" (UniqueName: \"kubernetes.io/projected/1e48099f-7475-4195-9700-be5517e73002-kube-api-access-r2qd9\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.464895 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e48099f-7475-4195-9700-be5517e73002-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.602794 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x984t"] Jan 22 11:03:24 crc kubenswrapper[4752]: I0122 11:03:24.613972 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x984t"] Jan 22 11:03:25 crc kubenswrapper[4752]: I0122 11:03:25.110412 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e48099f-7475-4195-9700-be5517e73002" path="/var/lib/kubelet/pods/1e48099f-7475-4195-9700-be5517e73002/volumes" Jan 22 11:04:27 crc kubenswrapper[4752]: I0122 11:04:27.724484 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:04:27 crc kubenswrapper[4752]: I0122 11:04:27.725127 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:04:57 crc kubenswrapper[4752]: I0122 11:04:57.723650 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:04:57 crc kubenswrapper[4752]: I0122 11:04:57.724909 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:05:27 crc kubenswrapper[4752]: I0122 11:05:27.723294 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:05:27 crc kubenswrapper[4752]: I0122 11:05:27.724038 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:05:27 crc kubenswrapper[4752]: I0122 11:05:27.724104 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 11:05:27 crc kubenswrapper[4752]: I0122 11:05:27.725183 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:05:27 crc kubenswrapper[4752]: I0122 11:05:27.725282 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" gracePeriod=600 Jan 22 11:05:28 crc kubenswrapper[4752]: E0122 11:05:28.054560 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:05:28 crc kubenswrapper[4752]: I0122 11:05:28.525939 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" exitCode=0 Jan 22 11:05:28 crc kubenswrapper[4752]: I0122 11:05:28.525992 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61"} Jan 22 11:05:28 crc kubenswrapper[4752]: I0122 11:05:28.526038 4752 scope.go:117] "RemoveContainer" containerID="70cbafe9440260ad6827dda2f3848d07643b15d146fa166af3af676a25a18d68" Jan 22 
11:05:28 crc kubenswrapper[4752]: I0122 11:05:28.527057 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:05:28 crc kubenswrapper[4752]: E0122 11:05:28.527655 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:05:40 crc kubenswrapper[4752]: I0122 11:05:40.098225 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:05:40 crc kubenswrapper[4752]: E0122 11:05:40.099193 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:05:51 crc kubenswrapper[4752]: I0122 11:05:51.106624 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:05:51 crc kubenswrapper[4752]: E0122 11:05:51.107654 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:06:04 crc kubenswrapper[4752]: I0122 11:06:04.097659 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:06:04 crc kubenswrapper[4752]: E0122 11:06:04.098481 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:06:17 crc kubenswrapper[4752]: I0122 11:06:17.098885 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:06:17 crc kubenswrapper[4752]: E0122 11:06:17.101328 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:06:29 crc kubenswrapper[4752]: I0122 11:06:29.100104 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:06:29 crc 
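[annotation] The repeating RemoveContainer/"Error syncing pod, skipping" pairs above and below are kubelet retrying a container that is already at its maximum crash-loop backoff; the quoted "back-off 5m0s" is the cap itself. As I understand the kubelet defaults (not confirmed by anything in this log), the schedule starts at 10s and doubles per crash up to that 5m cap — a sketch of that doubling, not kubelet's actual code:

```go
// backoff.go — the schedule behind "back-off 5m0s restarting failed container".
// Assumed defaults: 10s initial delay, doubling, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial    = 10 * time.Second
		maxBackoff = 5 * time.Minute
	)
	delay, crash := initial, 1
	for ; delay < maxBackoff; crash++ {
		fmt.Printf("crash %d: next restart in %v\n", crash, delay)
		delay *= 2
	}
	// From here on every retry logs the "back-off 5m0s" message seen above.
	fmt.Printf("crash %d+: next restart in %v\n", crash, maxBackoff)
}
```

That cap matches the cadence visible here: once saturated, the RemoveContainer attempts recur only every ten-odd seconds as sync loops fire, each immediately skipped with the same 5m0s message.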
kubenswrapper[4752]: E0122 11:06:29.100918 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:06:41 crc kubenswrapper[4752]: I0122 11:06:41.106181 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:06:41 crc kubenswrapper[4752]: E0122 11:06:41.108663 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:06:54 crc kubenswrapper[4752]: I0122 11:06:54.098297 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:06:54 crc kubenswrapper[4752]: E0122 11:06:54.099252 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:06:56 crc kubenswrapper[4752]: I0122 11:06:56.462576 4752 generic.go:334] "Generic (PLEG): container finished" podID="68346080-e43b-4ba0-8ddb-b15da9e7e5bb" containerID="4eae733ca6e3a44f3be5712ece616e3f9d0deaba39a01b222f37cbf9236c19b7" exitCode=0 Jan 22 11:06:56 crc kubenswrapper[4752]: I0122 11:06:56.462848 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" event={"ID":"68346080-e43b-4ba0-8ddb-b15da9e7e5bb","Type":"ContainerDied","Data":"4eae733ca6e3a44f3be5712ece616e3f9d0deaba39a01b222f37cbf9236c19b7"} Jan 22 11:06:57 crc kubenswrapper[4752]: I0122 11:06:57.919534 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.045484 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-combined-ca-bundle\") pod \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.045696 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-inventory\") pod \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.045747 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-secret-0\") pod \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.045780 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7ks5\" (UniqueName: \"kubernetes.io/projected/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-kube-api-access-d7ks5\") pod \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.046007 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-ssh-key-openstack-edpm-ipam\") pod \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\" (UID: \"68346080-e43b-4ba0-8ddb-b15da9e7e5bb\") " Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.053259 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-kube-api-access-d7ks5" (OuterVolumeSpecName: "kube-api-access-d7ks5") pod "68346080-e43b-4ba0-8ddb-b15da9e7e5bb" (UID: "68346080-e43b-4ba0-8ddb-b15da9e7e5bb"). InnerVolumeSpecName "kube-api-access-d7ks5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.060098 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "68346080-e43b-4ba0-8ddb-b15da9e7e5bb" (UID: "68346080-e43b-4ba0-8ddb-b15da9e7e5bb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.090531 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68346080-e43b-4ba0-8ddb-b15da9e7e5bb" (UID: "68346080-e43b-4ba0-8ddb-b15da9e7e5bb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.092505 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-inventory" (OuterVolumeSpecName: "inventory") pod "68346080-e43b-4ba0-8ddb-b15da9e7e5bb" (UID: "68346080-e43b-4ba0-8ddb-b15da9e7e5bb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.096789 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "68346080-e43b-4ba0-8ddb-b15da9e7e5bb" (UID: "68346080-e43b-4ba0-8ddb-b15da9e7e5bb"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.148403 4752 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.148440 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.148454 4752 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.148462 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7ks5\" (UniqueName: \"kubernetes.io/projected/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-kube-api-access-d7ks5\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.148471 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68346080-e43b-4ba0-8ddb-b15da9e7e5bb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.483880 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" event={"ID":"68346080-e43b-4ba0-8ddb-b15da9e7e5bb","Type":"ContainerDied","Data":"d3f230f901013c87614144613e3510b2ee8fa6c358a8b96ba5916d46f866063d"} Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.484100 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f230f901013c87614144613e3510b2ee8fa6c358a8b96ba5916d46f866063d" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.483939 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xnxjq" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579085 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz"] Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579529 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="extract-utilities" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579551 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="extract-utilities" Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579581 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="registry-server" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579590 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="registry-server" Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579619 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68346080-e43b-4ba0-8ddb-b15da9e7e5bb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579630 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="68346080-e43b-4ba0-8ddb-b15da9e7e5bb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579649 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="extract-utilities" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579657 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="extract-utilities" Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579675 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="registry-server" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579684 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="registry-server" Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579701 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="extract-content" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579709 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="extract-content" Jan 22 11:06:58 crc kubenswrapper[4752]: E0122 11:06:58.579722 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="extract-content" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579730 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="extract-content" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.579981 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e48099f-7475-4195-9700-be5517e73002" containerName="registry-server" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.580023 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="68346080-e43b-4ba0-8ddb-b15da9e7e5bb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 22 
11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.580033 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26af3e3-383d-4881-8d3d-a5e96dc4cb17" containerName="registry-server" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.581011 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.586005 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.586213 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.586399 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.586556 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.586696 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.586900 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.587212 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.595213 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz"] Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.659784 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.659972 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660021 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660067 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpq8\" (UniqueName: \"kubernetes.io/projected/e9332027-dcd4-40b0-862f-ca03d1d28075-kube-api-access-tmpq8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660096 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660118 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660197 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660218 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.660236 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762381 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762454 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762479 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762511 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpq8\" (UniqueName: \"kubernetes.io/projected/e9332027-dcd4-40b0-862f-ca03d1d28075-kube-api-access-tmpq8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762537 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762556 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762637 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762659 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.762677 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.763955 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.768413 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.768983 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.769784 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.770767 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.771754 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.775456 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.776550 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.778743 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpq8\" (UniqueName: \"kubernetes.io/projected/e9332027-dcd4-40b0-862f-ca03d1d28075-kube-api-access-tmpq8\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nl6wz\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:58 crc kubenswrapper[4752]: I0122 11:06:58.947790 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:06:59 crc kubenswrapper[4752]: I0122 11:06:59.520623 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz"] Jan 22 11:07:00 crc kubenswrapper[4752]: I0122 11:07:00.502378 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" event={"ID":"e9332027-dcd4-40b0-862f-ca03d1d28075","Type":"ContainerStarted","Data":"59377e3b690ede6bcb771bfd68f1107df2faca8dbf61111a77e3fc7bd0823b16"} Jan 22 11:07:00 crc kubenswrapper[4752]: I0122 11:07:00.503041 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" event={"ID":"e9332027-dcd4-40b0-862f-ca03d1d28075","Type":"ContainerStarted","Data":"dd7b491e496eeaae6627b2ced1469fc69ba456b530dc75a280bd04d8e573d399"} Jan 22 11:07:00 crc kubenswrapper[4752]: I0122 11:07:00.536078 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" podStartSLOduration=2.116256677 podStartE2EDuration="2.536056895s" podCreationTimestamp="2026-01-22 11:06:58 +0000 UTC" firstStartedPulling="2026-01-22 11:06:59.517214157 +0000 UTC m=+2498.747157075" lastFinishedPulling="2026-01-22 11:06:59.937014355 +0000 UTC m=+2499.166957293" observedRunningTime="2026-01-22 11:07:00.527958654 +0000 UTC m=+2499.757901572" watchObservedRunningTime="2026-01-22 11:07:00.536056895 +0000 UTC m=+2499.765999813" Jan 22 11:07:06 crc kubenswrapper[4752]: I0122 11:07:06.098539 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:07:06 crc kubenswrapper[4752]: E0122 11:07:06.099648 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:07:19 crc kubenswrapper[4752]: I0122 11:07:19.098792 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:07:19 crc kubenswrapper[4752]: E0122 11:07:19.099636 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:07:33 crc kubenswrapper[4752]: I0122 11:07:33.099064 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:07:33 crc kubenswrapper[4752]: E0122 11:07:33.100527 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" 
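[annotation] The pod_startup_latency_tracker entry above records both podStartE2EDuration="2.536056895s" and podStartSLOduration=2.116256677 for this pod. The difference is exactly the image-pull window bounded by firstStartedPulling/lastFinishedPulling when computed from the monotonic m=+ offsets; all four numbers below are quoted from that entry, and the only assumption is the subtraction itself, which reproduces the logged figure to the nanosecond:

```go
// startup.go — reproduce podStartSLOduration from the tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 2536056895 * time.Nanosecond          // podStartE2EDuration="2.536056895s"
	firstPull := 2498747157075 * time.Nanosecond // firstStartedPulling ... m=+2498.747157075
	lastPull := 2499166957293 * time.Nanosecond  // lastFinishedPulling ... m=+2499.166957293

	pull := lastPull - firstPull
	fmt.Println("image pull window:", pull)     // 419.800218ms
	fmt.Println("SLO duration:     ", e2e-pull) // 2.116256677s == podStartSLOduration
}
```

So the SLO figure is the end-to-end startup time with image pulling excluded, which is why it is the smaller of the two durations in the entry.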
podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:07:44 crc kubenswrapper[4752]: I0122 11:07:44.097991 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:07:44 crc kubenswrapper[4752]: E0122 11:07:44.098847 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:07:58 crc kubenswrapper[4752]: I0122 11:07:58.098610 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:07:58 crc kubenswrapper[4752]: E0122 11:07:58.099295 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:08:10 crc kubenswrapper[4752]: I0122 11:08:10.098048 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:08:10 crc kubenswrapper[4752]: E0122 11:08:10.098961 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:08:22 crc kubenswrapper[4752]: I0122 11:08:22.097712 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:08:22 crc kubenswrapper[4752]: E0122 11:08:22.098754 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:08:33 crc kubenswrapper[4752]: I0122 11:08:33.098630 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:08:33 crc kubenswrapper[4752]: E0122 11:08:33.099494 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:08:44 crc kubenswrapper[4752]: I0122 11:08:44.098558 4752 scope.go:117] "RemoveContainer" 
containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:08:44 crc kubenswrapper[4752]: E0122 11:08:44.099289 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:08:58 crc kubenswrapper[4752]: I0122 11:08:58.098702 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:08:58 crc kubenswrapper[4752]: E0122 11:08:58.099790 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:09:09 crc kubenswrapper[4752]: I0122 11:09:09.099837 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:09:09 crc kubenswrapper[4752]: E0122 11:09:09.101291 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:09:23 crc kubenswrapper[4752]: I0122 11:09:23.239381 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:09:23 crc kubenswrapper[4752]: E0122 11:09:23.240229 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:09:36 crc kubenswrapper[4752]: I0122 11:09:36.099314 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:09:36 crc kubenswrapper[4752]: E0122 11:09:36.100400 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:09:38 crc kubenswrapper[4752]: I0122 11:09:38.147842 4752 generic.go:334] "Generic (PLEG): container finished" podID="e9332027-dcd4-40b0-862f-ca03d1d28075" containerID="59377e3b690ede6bcb771bfd68f1107df2faca8dbf61111a77e3fc7bd0823b16" exitCode=0 Jan 22 11:09:38 crc 
kubenswrapper[4752]: I0122 11:09:38.148057 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" event={"ID":"e9332027-dcd4-40b0-862f-ca03d1d28075","Type":"ContainerDied","Data":"59377e3b690ede6bcb771bfd68f1107df2faca8dbf61111a77e3fc7bd0823b16"} Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.657617 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.708359 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-extra-config-0\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.708502 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-combined-ca-bundle\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.708528 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-1\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.708579 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-inventory\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.709415 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-1\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.709451 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-ssh-key-openstack-edpm-ipam\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.709479 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-0\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.709513 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmpq8\" (UniqueName: \"kubernetes.io/projected/e9332027-dcd4-40b0-862f-ca03d1d28075-kube-api-access-tmpq8\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.709563 4752 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-0\") pod \"e9332027-dcd4-40b0-862f-ca03d1d28075\" (UID: \"e9332027-dcd4-40b0-862f-ca03d1d28075\") " Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.717588 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9332027-dcd4-40b0-862f-ca03d1d28075-kube-api-access-tmpq8" (OuterVolumeSpecName: "kube-api-access-tmpq8") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "kube-api-access-tmpq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.731650 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.745156 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.749343 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.750872 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.767395 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.767828 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.769880 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-inventory" (OuterVolumeSpecName: "inventory") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.777996 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "e9332027-dcd4-40b0-862f-ca03d1d28075" (UID: "e9332027-dcd4-40b0-862f-ca03d1d28075"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812284 4752 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812326 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmpq8\" (UniqueName: \"kubernetes.io/projected/e9332027-dcd4-40b0-862f-ca03d1d28075-kube-api-access-tmpq8\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812339 4752 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812351 4752 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812363 4752 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812375 4752 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812390 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812401 4752 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:39 crc kubenswrapper[4752]: I0122 11:09:39.812412 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9332027-dcd4-40b0-862f-ca03d1d28075-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.169317 4752 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" event={"ID":"e9332027-dcd4-40b0-862f-ca03d1d28075","Type":"ContainerDied","Data":"dd7b491e496eeaae6627b2ced1469fc69ba456b530dc75a280bd04d8e573d399"} Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.169378 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nl6wz" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.169383 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7b491e496eeaae6627b2ced1469fc69ba456b530dc75a280bd04d8e573d399" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.389409 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v"] Jan 22 11:09:40 crc kubenswrapper[4752]: E0122 11:09:40.390264 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9332027-dcd4-40b0-862f-ca03d1d28075" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.390293 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9332027-dcd4-40b0-862f-ca03d1d28075" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.390700 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9332027-dcd4-40b0-862f-ca03d1d28075" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.392482 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.417206 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.417464 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.419402 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.419630 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.419968 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j8c6q" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434459 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434635 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434674 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434714 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434760 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434805 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnpmc\" (UniqueName: \"kubernetes.io/projected/a8d90369-1785-435b-a724-db9ed8e6c5a3-kube-api-access-lnpmc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.434875 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.492869 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v"] Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536102 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536642 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 
11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536667 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536689 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536718 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536743 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnpmc\" (UniqueName: \"kubernetes.io/projected/a8d90369-1785-435b-a724-db9ed8e6c5a3-kube-api-access-lnpmc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.536809 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.540843 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.540950 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.545536 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.545562 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.553328 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.553620 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.556164 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnpmc\" (UniqueName: \"kubernetes.io/projected/a8d90369-1785-435b-a724-db9ed8e6c5a3-kube-api-access-lnpmc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:40 crc kubenswrapper[4752]: I0122 11:09:40.762957 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:09:41 crc kubenswrapper[4752]: I0122 11:09:41.316064 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v"] Jan 22 11:09:41 crc kubenswrapper[4752]: I0122 11:09:41.326016 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:09:42 crc kubenswrapper[4752]: I0122 11:09:42.208970 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" event={"ID":"a8d90369-1785-435b-a724-db9ed8e6c5a3","Type":"ContainerStarted","Data":"f7c7cf34e4a52970902354def60d88ff4e3da6b4e16bc77d6fce25df3886bb48"} Jan 22 11:09:43 crc kubenswrapper[4752]: I0122 11:09:43.222390 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" event={"ID":"a8d90369-1785-435b-a724-db9ed8e6c5a3","Type":"ContainerStarted","Data":"63868111a0f3e1e8aae536f92989e7cd311cee9d3eef54cbf98d1e0bceab7273"} Jan 22 11:09:43 crc kubenswrapper[4752]: I0122 11:09:43.255259 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" podStartSLOduration=2.492437534 podStartE2EDuration="3.25523534s" podCreationTimestamp="2026-01-22 11:09:40 +0000 UTC" firstStartedPulling="2026-01-22 11:09:41.325730356 +0000 UTC m=+2660.555673264" lastFinishedPulling="2026-01-22 11:09:42.088528162 +0000 UTC m=+2661.318471070" observedRunningTime="2026-01-22 11:09:43.254474771 +0000 UTC m=+2662.484417719" watchObservedRunningTime="2026-01-22 11:09:43.25523534 +0000 UTC m=+2662.485178258" Jan 22 11:09:49 crc kubenswrapper[4752]: I0122 11:09:49.097496 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:09:49 crc kubenswrapper[4752]: E0122 11:09:49.098274 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:10:03 crc kubenswrapper[4752]: I0122 11:10:03.098552 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:10:03 crc kubenswrapper[4752]: E0122 11:10:03.099412 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:10:17 crc kubenswrapper[4752]: I0122 11:10:17.098325 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:10:17 crc kubenswrapper[4752]: E0122 11:10:17.098991 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:10:28 crc kubenswrapper[4752]: I0122 11:10:28.098228 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61" Jan 22 11:10:29 crc kubenswrapper[4752]: I0122 11:10:29.681727 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"a3a05878df63bd7422e6a5f85a14142398ffaecda9556b5a85b69b9b9c85dc24"} Jan 22 11:12:07 crc kubenswrapper[4752]: I0122 11:12:07.726563 4752 generic.go:334] "Generic (PLEG): container finished" podID="a8d90369-1785-435b-a724-db9ed8e6c5a3" containerID="63868111a0f3e1e8aae536f92989e7cd311cee9d3eef54cbf98d1e0bceab7273" exitCode=0 Jan 22 11:12:07 crc kubenswrapper[4752]: I0122 11:12:07.726664 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" event={"ID":"a8d90369-1785-435b-a724-db9ed8e6c5a3","Type":"ContainerDied","Data":"63868111a0f3e1e8aae536f92989e7cd311cee9d3eef54cbf98d1e0bceab7273"} Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.273483 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.338887 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-inventory\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.338973 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ssh-key-openstack-edpm-ipam\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.339057 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-1\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.339227 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-0\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.339264 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-telemetry-combined-ca-bundle\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.339315 4752 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lnpmc\" (UniqueName: \"kubernetes.io/projected/a8d90369-1785-435b-a724-db9ed8e6c5a3-kube-api-access-lnpmc\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.339464 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-2\") pod \"a8d90369-1785-435b-a724-db9ed8e6c5a3\" (UID: \"a8d90369-1785-435b-a724-db9ed8e6c5a3\") " Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.346547 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d90369-1785-435b-a724-db9ed8e6c5a3-kube-api-access-lnpmc" (OuterVolumeSpecName: "kube-api-access-lnpmc") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "kube-api-access-lnpmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.346727 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.388195 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.392205 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-inventory" (OuterVolumeSpecName: "inventory") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.392903 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.396564 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.397191 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a8d90369-1785-435b-a724-db9ed8e6c5a3" (UID: "a8d90369-1785-435b-a724-db9ed8e6c5a3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442544 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnpmc\" (UniqueName: \"kubernetes.io/projected/a8d90369-1785-435b-a724-db9ed8e6c5a3-kube-api-access-lnpmc\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442599 4752 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442623 4752 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442645 4752 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442664 4752 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442682 4752 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.442701 4752 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d90369-1785-435b-a724-db9ed8e6c5a3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.754769 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" event={"ID":"a8d90369-1785-435b-a724-db9ed8e6c5a3","Type":"ContainerDied","Data":"f7c7cf34e4a52970902354def60d88ff4e3da6b4e16bc77d6fce25df3886bb48"} Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.754813 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7c7cf34e4a52970902354def60d88ff4e3da6b4e16bc77d6fce25df3886bb48" Jan 22 11:12:09 crc kubenswrapper[4752]: I0122 11:12:09.754876 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cpr8v" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.133827 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrh6q"] Jan 22 11:12:19 crc kubenswrapper[4752]: E0122 11:12:19.135394 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d90369-1785-435b-a724-db9ed8e6c5a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.135425 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d90369-1785-435b-a724-db9ed8e6c5a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.135947 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d90369-1785-435b-a724-db9ed8e6c5a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.139098 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.149559 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrh6q"] Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.257325 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlltf\" (UniqueName: \"kubernetes.io/projected/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-kube-api-access-qlltf\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.257408 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-utilities\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.257502 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-catalog-content\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.359084 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-catalog-content\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.359214 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlltf\" (UniqueName: \"kubernetes.io/projected/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-kube-api-access-qlltf\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.359252 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-utilities\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.359752 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-catalog-content\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.359909 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-utilities\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.378495 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlltf\" (UniqueName: \"kubernetes.io/projected/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-kube-api-access-qlltf\") pod \"redhat-operators-mrh6q\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.500784 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:19 crc kubenswrapper[4752]: I0122 11:12:19.968122 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrh6q"] Jan 22 11:12:20 crc kubenswrapper[4752]: I0122 11:12:20.056007 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerStarted","Data":"47a8995c2200fa9c8afbe4bba89b98057120d2d7c3c94d034581cfb3739d50dc"} Jan 22 11:12:21 crc kubenswrapper[4752]: I0122 11:12:21.075338 4752 generic.go:334] "Generic (PLEG): container finished" podID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerID="0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35" exitCode=0 Jan 22 11:12:21 crc kubenswrapper[4752]: I0122 11:12:21.075436 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerDied","Data":"0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35"} Jan 22 11:12:23 crc kubenswrapper[4752]: I0122 11:12:23.110773 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerStarted","Data":"95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b"} Jan 22 11:12:27 crc kubenswrapper[4752]: I0122 11:12:27.139901 4752 generic.go:334] "Generic (PLEG): container finished" podID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerID="95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b" exitCode=0 Jan 22 11:12:27 crc kubenswrapper[4752]: I0122 11:12:27.141481 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerDied","Data":"95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b"} Jan 22 11:12:29 crc kubenswrapper[4752]: I0122 
11:12:29.162311 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerStarted","Data":"d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536"} Jan 22 11:12:29 crc kubenswrapper[4752]: I0122 11:12:29.188536 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrh6q" podStartSLOduration=2.980617831 podStartE2EDuration="10.188513749s" podCreationTimestamp="2026-01-22 11:12:19 +0000 UTC" firstStartedPulling="2026-01-22 11:12:21.077903883 +0000 UTC m=+2820.307846801" lastFinishedPulling="2026-01-22 11:12:28.285799781 +0000 UTC m=+2827.515742719" observedRunningTime="2026-01-22 11:12:29.185206054 +0000 UTC m=+2828.415149002" watchObservedRunningTime="2026-01-22 11:12:29.188513749 +0000 UTC m=+2828.418456677" Jan 22 11:12:29 crc kubenswrapper[4752]: I0122 11:12:29.501042 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:29 crc kubenswrapper[4752]: I0122 11:12:29.501082 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:30 crc kubenswrapper[4752]: I0122 11:12:30.581132 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrh6q" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="registry-server" probeResult="failure" output=< Jan 22 11:12:30 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 11:12:30 crc kubenswrapper[4752]: > Jan 22 11:12:39 crc kubenswrapper[4752]: I0122 11:12:39.563227 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:39 crc kubenswrapper[4752]: I0122 11:12:39.617597 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:39 crc kubenswrapper[4752]: I0122 11:12:39.808345 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrh6q"] Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.283560 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrh6q" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="registry-server" containerID="cri-o://d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536" gracePeriod=2 Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.693246 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.766412 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-catalog-content\") pod \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.766656 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-utilities\") pod \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.766736 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlltf\" (UniqueName: \"kubernetes.io/projected/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-kube-api-access-qlltf\") pod \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\" (UID: \"77ba5ca9-9ef8-4014-92cf-62c71bc66b66\") " Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.767186 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-utilities" (OuterVolumeSpecName: "utilities") pod "77ba5ca9-9ef8-4014-92cf-62c71bc66b66" (UID: "77ba5ca9-9ef8-4014-92cf-62c71bc66b66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.767445 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.772920 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-kube-api-access-qlltf" (OuterVolumeSpecName: "kube-api-access-qlltf") pod "77ba5ca9-9ef8-4014-92cf-62c71bc66b66" (UID: "77ba5ca9-9ef8-4014-92cf-62c71bc66b66"). InnerVolumeSpecName "kube-api-access-qlltf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.869357 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlltf\" (UniqueName: \"kubernetes.io/projected/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-kube-api-access-qlltf\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.873578 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77ba5ca9-9ef8-4014-92cf-62c71bc66b66" (UID: "77ba5ca9-9ef8-4014-92cf-62c71bc66b66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:12:41 crc kubenswrapper[4752]: I0122 11:12:41.970600 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77ba5ca9-9ef8-4014-92cf-62c71bc66b66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.296472 4752 generic.go:334] "Generic (PLEG): container finished" podID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerID="d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536" exitCode=0 Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.296534 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerDied","Data":"d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536"} Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.296572 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrh6q" event={"ID":"77ba5ca9-9ef8-4014-92cf-62c71bc66b66","Type":"ContainerDied","Data":"47a8995c2200fa9c8afbe4bba89b98057120d2d7c3c94d034581cfb3739d50dc"} Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.296612 4752 scope.go:117] "RemoveContainer" containerID="d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.296803 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrh6q" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.330549 4752 scope.go:117] "RemoveContainer" containerID="95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.340153 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrh6q"] Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.354358 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrh6q"] Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.362049 4752 scope.go:117] "RemoveContainer" containerID="0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.427378 4752 scope.go:117] "RemoveContainer" containerID="d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536" Jan 22 11:12:42 crc kubenswrapper[4752]: E0122 11:12:42.428257 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536\": container with ID starting with d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536 not found: ID does not exist" containerID="d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.428297 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536"} err="failed to get container status \"d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536\": rpc error: code = NotFound desc = could not find container \"d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536\": container with ID starting with d681781711def8219718971fcf495680e96ce86be22787d2520dfdbe7d17c536 not found: ID does not exist" Jan 22 11:12:42 crc 
kubenswrapper[4752]: I0122 11:12:42.428340 4752 scope.go:117] "RemoveContainer" containerID="95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b" Jan 22 11:12:42 crc kubenswrapper[4752]: E0122 11:12:42.428761 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b\": container with ID starting with 95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b not found: ID does not exist" containerID="95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.428830 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b"} err="failed to get container status \"95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b\": rpc error: code = NotFound desc = could not find container \"95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b\": container with ID starting with 95057b96355032d91a77026ca9aee75ba14ef070d535dcab4884c5b233fb359b not found: ID does not exist" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.428884 4752 scope.go:117] "RemoveContainer" containerID="0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35" Jan 22 11:12:42 crc kubenswrapper[4752]: E0122 11:12:42.429318 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35\": container with ID starting with 0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35 not found: ID does not exist" containerID="0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35" Jan 22 11:12:42 crc kubenswrapper[4752]: I0122 11:12:42.429347 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35"} err="failed to get container status \"0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35\": rpc error: code = NotFound desc = could not find container \"0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35\": container with ID starting with 0b1f1fa615312188a4499ffbb52067f20673084b2643a8119ddb5e6847189e35 not found: ID does not exist" Jan 22 11:12:43 crc kubenswrapper[4752]: I0122 11:12:43.112802 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" path="/var/lib/kubelet/pods/77ba5ca9-9ef8-4014-92cf-62c71bc66b66/volumes" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.944069 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 22 11:12:46 crc kubenswrapper[4752]: E0122 11:12:46.944825 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="extract-content" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.944837 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="extract-content" Jan 22 11:12:46 crc kubenswrapper[4752]: E0122 11:12:46.944848 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="registry-server" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.944867 4752 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="registry-server" Jan 22 11:12:46 crc kubenswrapper[4752]: E0122 11:12:46.944878 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="extract-utilities" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.944884 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="extract-utilities" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.945089 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ba5ca9-9ef8-4014-92cf-62c71bc66b66" containerName="registry-server" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.946044 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.951221 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.959017 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.991155 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-config-data\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.991628 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.991714 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.991793 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.991913 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992044 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992151 4752 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-scripts\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992233 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-dev\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992305 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992376 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-sys\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992448 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-config-data-custom\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992529 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-lib-modules\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992609 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-run\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992682 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-nvme\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:46 crc kubenswrapper[4752]: I0122 11:12:46.992757 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjx6\" (UniqueName: \"kubernetes.io/projected/84ec4777-bffa-48d9-aacf-3ad166e72c84-kube-api-access-9vjx6\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.020298 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.023297 4752 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.030149 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.037646 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.052928 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.054512 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.057263 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.087141 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099544 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099587 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099607 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099638 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-config-data\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099661 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099677 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099703 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099720 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099736 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099758 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099776 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099791 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099808 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099824 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099846 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099878 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099906 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099923 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znv4s\" (UniqueName: \"kubernetes.io/projected/5a8b857a-601d-445f-914f-85412b1fba46-kube-api-access-znv4s\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099942 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099967 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.099982 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100021 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100039 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100054 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-scripts\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100069 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-run\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100083 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100102 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100125 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-dev\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100140 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100156 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100171 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100187 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100204 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-sys\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100218 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100233 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-run\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100251 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-config-data-custom\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100267 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100287 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbmdg\" (UniqueName: \"kubernetes.io/projected/fc55cd03-d309-466c-b2a7-d57bdcc8690b-kube-api-access-rbmdg\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100305 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-lib-modules\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100326 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100344 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-run\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100359 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-nvme\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100374 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100389 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjx6\" (UniqueName: \"kubernetes.io/projected/84ec4777-bffa-48d9-aacf-3ad166e72c84-kube-api-access-9vjx6\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.100406 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.101254 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-dev\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.101440 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.101476 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-sys\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.101504 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-lib-modules\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.102065 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-nvme\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.102152 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-run\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.103930 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.104121 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.104186 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.104236 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84ec4777-bffa-48d9-aacf-3ad166e72c84-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.108743 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-config-data-custom\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.126313 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-scripts\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.126608 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-config-data\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.127626 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ec4777-bffa-48d9-aacf-3ad166e72c84-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.166276 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjx6\" (UniqueName: \"kubernetes.io/projected/84ec4777-bffa-48d9-aacf-3ad166e72c84-kube-api-access-9vjx6\") pod \"cinder-backup-0\" (UID: \"84ec4777-bffa-48d9-aacf-3ad166e72c84\") " pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202145 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202382 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202509 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202613 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202707 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202800 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.202933 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbmdg\" (UniqueName: \"kubernetes.io/projected/fc55cd03-d309-466c-b2a7-d57bdcc8690b-kube-api-access-rbmdg\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203047 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203139 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203240 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203336 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203425 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203522 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203632 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " 
pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203721 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203836 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.203936 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204031 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204131 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204224 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204312 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204411 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204544 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204634 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znv4s\" (UniqueName: 
\"kubernetes.io/projected/5a8b857a-601d-445f-914f-85412b1fba46-kube-api-access-znv4s\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204754 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204896 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.204991 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.205068 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-run\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.205133 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.205209 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.205365 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.205461 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206220 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206302 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206331 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206360 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206407 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206447 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.206485 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.207416 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.207458 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.207472 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.207486 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.207650 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.210482 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.211200 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.211255 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.211281 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.211303 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.211542 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.211762 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a8b857a-601d-445f-914f-85412b1fba46-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.212966 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc55cd03-d309-466c-b2a7-d57bdcc8690b-run\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.216153 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc 
kubenswrapper[4752]: I0122 11:12:47.217218 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.219028 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.219866 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc55cd03-d309-466c-b2a7-d57bdcc8690b-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.220639 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.245936 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8b857a-601d-445f-914f-85412b1fba46-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.246529 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znv4s\" (UniqueName: \"kubernetes.io/projected/5a8b857a-601d-445f-914f-85412b1fba46-kube-api-access-znv4s\") pod \"cinder-volume-nfs-2-0\" (UID: \"5a8b857a-601d-445f-914f-85412b1fba46\") " pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.248428 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbmdg\" (UniqueName: \"kubernetes.io/projected/fc55cd03-d309-466c-b2a7-d57bdcc8690b-kube-api-access-rbmdg\") pod \"cinder-volume-nfs-0\" (UID: \"fc55cd03-d309-466c-b2a7-d57bdcc8690b\") " pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.288347 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.356146 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.375658 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:47 crc kubenswrapper[4752]: I0122 11:12:47.985949 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 22 11:12:48 crc kubenswrapper[4752]: I0122 11:12:48.083698 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 22 11:12:48 crc kubenswrapper[4752]: W0122 11:12:48.084676 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a8b857a_601d_445f_914f_85412b1fba46.slice/crio-4206e77c5f3193dca6aa5f4095207fa962ab57dcd6b895dc59223d83147fc674 WatchSource:0}: Error finding container 4206e77c5f3193dca6aa5f4095207fa962ab57dcd6b895dc59223d83147fc674: Status 404 returned error can't find the container with id 4206e77c5f3193dca6aa5f4095207fa962ab57dcd6b895dc59223d83147fc674 Jan 22 11:12:48 crc kubenswrapper[4752]: I0122 11:12:48.373841 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"5a8b857a-601d-445f-914f-85412b1fba46","Type":"ContainerStarted","Data":"4206e77c5f3193dca6aa5f4095207fa962ab57dcd6b895dc59223d83147fc674"} Jan 22 11:12:48 crc kubenswrapper[4752]: I0122 11:12:48.374805 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84ec4777-bffa-48d9-aacf-3ad166e72c84","Type":"ContainerStarted","Data":"d0b0286a25d6ddcfcea48a2102bff9a07a59bac1a0a0b3e60521c952e3efcc06"} Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.147534 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 22 11:12:49 crc kubenswrapper[4752]: W0122 11:12:49.158128 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc55cd03_d309_466c_b2a7_d57bdcc8690b.slice/crio-f6c6642d87d44fbf967cc32e5a910634a27ea9b279ce2af6f7887887d305f070 WatchSource:0}: Error finding container f6c6642d87d44fbf967cc32e5a910634a27ea9b279ce2af6f7887887d305f070: Status 404 returned error can't find the container with id f6c6642d87d44fbf967cc32e5a910634a27ea9b279ce2af6f7887887d305f070 Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.386690 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84ec4777-bffa-48d9-aacf-3ad166e72c84","Type":"ContainerStarted","Data":"d89f868b6760fb423332a370632c9b1f7795595331aee2afed8090b2e2d82d83"} Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.387097 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"84ec4777-bffa-48d9-aacf-3ad166e72c84","Type":"ContainerStarted","Data":"b241bbf23950ed80921bdc9571d295e0bfb91011732ad2a1952d5e5ef73212b6"} Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.389357 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"5a8b857a-601d-445f-914f-85412b1fba46","Type":"ContainerStarted","Data":"f5337f0c1e46ca92e52a677c9343e76df9858c514d74736e3bb2bf68efc5129d"} Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.389391 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"5a8b857a-601d-445f-914f-85412b1fba46","Type":"ContainerStarted","Data":"2a2226c073bec3bedd3b8308c9753d89903e82ef2af32989da8427b4393cf14d"} Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.391912 4752 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fc55cd03-d309-466c-b2a7-d57bdcc8690b","Type":"ContainerStarted","Data":"f6c6642d87d44fbf967cc32e5a910634a27ea9b279ce2af6f7887887d305f070"} Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.464129 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.053554713 podStartE2EDuration="3.464106405s" podCreationTimestamp="2026-01-22 11:12:46 +0000 UTC" firstStartedPulling="2026-01-22 11:12:48.00057812 +0000 UTC m=+2847.230521038" lastFinishedPulling="2026-01-22 11:12:48.411129812 +0000 UTC m=+2847.641072730" observedRunningTime="2026-01-22 11:12:49.438593457 +0000 UTC m=+2848.668536385" watchObservedRunningTime="2026-01-22 11:12:49.464106405 +0000 UTC m=+2848.694049303" Jan 22 11:12:49 crc kubenswrapper[4752]: I0122 11:12:49.496128 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.175267634 podStartE2EDuration="2.496100731s" podCreationTimestamp="2026-01-22 11:12:47 +0000 UTC" firstStartedPulling="2026-01-22 11:12:48.087052781 +0000 UTC m=+2847.316995689" lastFinishedPulling="2026-01-22 11:12:48.407885878 +0000 UTC m=+2847.637828786" observedRunningTime="2026-01-22 11:12:49.4821339 +0000 UTC m=+2848.712076808" watchObservedRunningTime="2026-01-22 11:12:49.496100731 +0000 UTC m=+2848.726043659" Jan 22 11:12:50 crc kubenswrapper[4752]: I0122 11:12:50.406214 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fc55cd03-d309-466c-b2a7-d57bdcc8690b","Type":"ContainerStarted","Data":"b1a3cc6c84ecdeb7dc9f8c159585e787e374c838394a3610ff085aa907d1c146"} Jan 22 11:12:50 crc kubenswrapper[4752]: I0122 11:12:50.406536 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fc55cd03-d309-466c-b2a7-d57bdcc8690b","Type":"ContainerStarted","Data":"99d86a1abd2d816aa0a96f95c5d1d537393ce03580813c500cd02bb5014f1de2"} Jan 22 11:12:50 crc kubenswrapper[4752]: I0122 11:12:50.436819 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=4.436800199 podStartE2EDuration="4.436800199s" podCreationTimestamp="2026-01-22 11:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:12:50.433103454 +0000 UTC m=+2849.663046362" watchObservedRunningTime="2026-01-22 11:12:50.436800199 +0000 UTC m=+2849.666743117" Jan 22 11:12:52 crc kubenswrapper[4752]: I0122 11:12:52.289386 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 22 11:12:52 crc kubenswrapper[4752]: I0122 11:12:52.356919 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:52 crc kubenswrapper[4752]: I0122 11:12:52.376482 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:57 crc kubenswrapper[4752]: I0122 11:12:57.515686 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 22 11:12:57 crc kubenswrapper[4752]: I0122 11:12:57.601515 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 22 11:12:57 crc kubenswrapper[4752]: I0122 11:12:57.629270 4752 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 22 11:12:57 crc kubenswrapper[4752]: I0122 11:12:57.724033 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:12:57 crc kubenswrapper[4752]: I0122 11:12:57.724084 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.292485 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4jmv"] Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.295070 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.309317 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4jmv"] Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.348039 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfsh\" (UniqueName: \"kubernetes.io/projected/d1764a3b-afde-4cd6-9fec-5127bf9d410c-kube-api-access-ggfsh\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.348125 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-utilities\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.348206 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-catalog-content\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.450241 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggfsh\" (UniqueName: \"kubernetes.io/projected/d1764a3b-afde-4cd6-9fec-5127bf9d410c-kube-api-access-ggfsh\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.450333 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-utilities\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.450378 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-catalog-content\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.450977 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-catalog-content\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.451005 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-utilities\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.484019 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggfsh\" (UniqueName: \"kubernetes.io/projected/d1764a3b-afde-4cd6-9fec-5127bf9d410c-kube-api-access-ggfsh\") pod \"community-operators-b4jmv\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:00 crc kubenswrapper[4752]: I0122 11:13:00.652460 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:01 crc kubenswrapper[4752]: I0122 11:13:01.197475 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4jmv"] Jan 22 11:13:01 crc kubenswrapper[4752]: I0122 11:13:01.518442 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerStarted","Data":"7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f"} Jan 22 11:13:01 crc kubenswrapper[4752]: I0122 11:13:01.518786 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerStarted","Data":"678da1d736790d8b8193d9de905e1f86c946f15b52af4c6606a490bfd8500e65"} Jan 22 11:13:02 crc kubenswrapper[4752]: I0122 11:13:02.531585 4752 generic.go:334] "Generic (PLEG): container finished" podID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerID="7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f" exitCode=0 Jan 22 11:13:02 crc kubenswrapper[4752]: I0122 11:13:02.531663 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerDied","Data":"7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f"} Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.497439 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bfnc6"] Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.500677 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.511768 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfnc6"] Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.629585 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-catalog-content\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.629641 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2zbq\" (UniqueName: \"kubernetes.io/projected/61b3f880-7027-47ee-8833-f11ee6e127f8-kube-api-access-v2zbq\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.629781 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-utilities\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.731561 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-catalog-content\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.731610 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2zbq\" (UniqueName: \"kubernetes.io/projected/61b3f880-7027-47ee-8833-f11ee6e127f8-kube-api-access-v2zbq\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.731695 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-utilities\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.732075 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-catalog-content\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.732087 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-utilities\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.750760 4752 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v2zbq\" (UniqueName: \"kubernetes.io/projected/61b3f880-7027-47ee-8833-f11ee6e127f8-kube-api-access-v2zbq\") pod \"redhat-marketplace-bfnc6\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:03 crc kubenswrapper[4752]: I0122 11:13:03.832727 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:04 crc kubenswrapper[4752]: I0122 11:13:04.449165 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfnc6"] Jan 22 11:13:04 crc kubenswrapper[4752]: W0122 11:13:04.450008 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b3f880_7027_47ee_8833_f11ee6e127f8.slice/crio-3050baf6ffd6d6706cc7da763acf0958431d476cc0ccac41c47854a189d2785f WatchSource:0}: Error finding container 3050baf6ffd6d6706cc7da763acf0958431d476cc0ccac41c47854a189d2785f: Status 404 returned error can't find the container with id 3050baf6ffd6d6706cc7da763acf0958431d476cc0ccac41c47854a189d2785f Jan 22 11:13:04 crc kubenswrapper[4752]: I0122 11:13:04.570607 4752 generic.go:334] "Generic (PLEG): container finished" podID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerID="9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51" exitCode=0 Jan 22 11:13:04 crc kubenswrapper[4752]: I0122 11:13:04.570708 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerDied","Data":"9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51"} Jan 22 11:13:04 crc kubenswrapper[4752]: I0122 11:13:04.580187 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerStarted","Data":"3050baf6ffd6d6706cc7da763acf0958431d476cc0ccac41c47854a189d2785f"} Jan 22 11:13:05 crc kubenswrapper[4752]: I0122 11:13:05.595354 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerStarted","Data":"7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4"} Jan 22 11:13:05 crc kubenswrapper[4752]: I0122 11:13:05.598237 4752 generic.go:334] "Generic (PLEG): container finished" podID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerID="91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343" exitCode=0 Jan 22 11:13:05 crc kubenswrapper[4752]: I0122 11:13:05.598304 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerDied","Data":"91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343"} Jan 22 11:13:05 crc kubenswrapper[4752]: I0122 11:13:05.634026 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4jmv" podStartSLOduration=3.187781767 podStartE2EDuration="5.634004464s" podCreationTimestamp="2026-01-22 11:13:00 +0000 UTC" firstStartedPulling="2026-01-22 11:13:02.534490884 +0000 UTC m=+2861.764433792" lastFinishedPulling="2026-01-22 11:13:04.980713581 +0000 UTC m=+2864.210656489" observedRunningTime="2026-01-22 11:13:05.622094867 +0000 UTC m=+2864.852037795" 
watchObservedRunningTime="2026-01-22 11:13:05.634004464 +0000 UTC m=+2864.863947382" Jan 22 11:13:07 crc kubenswrapper[4752]: I0122 11:13:07.636173 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerStarted","Data":"976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d"} Jan 22 11:13:08 crc kubenswrapper[4752]: I0122 11:13:08.650025 4752 generic.go:334] "Generic (PLEG): container finished" podID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerID="976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d" exitCode=0 Jan 22 11:13:08 crc kubenswrapper[4752]: I0122 11:13:08.650193 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerDied","Data":"976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d"} Jan 22 11:13:09 crc kubenswrapper[4752]: I0122 11:13:09.668524 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerStarted","Data":"ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b"} Jan 22 11:13:09 crc kubenswrapper[4752]: I0122 11:13:09.700269 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bfnc6" podStartSLOduration=3.205885518 podStartE2EDuration="6.700250055s" podCreationTimestamp="2026-01-22 11:13:03 +0000 UTC" firstStartedPulling="2026-01-22 11:13:05.600534951 +0000 UTC m=+2864.830477859" lastFinishedPulling="2026-01-22 11:13:09.094899478 +0000 UTC m=+2868.324842396" observedRunningTime="2026-01-22 11:13:09.696987961 +0000 UTC m=+2868.926930889" watchObservedRunningTime="2026-01-22 11:13:09.700250055 +0000 UTC m=+2868.930192963" Jan 22 11:13:10 crc kubenswrapper[4752]: I0122 11:13:10.653263 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:10 crc kubenswrapper[4752]: I0122 11:13:10.653325 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:10 crc kubenswrapper[4752]: I0122 11:13:10.727005 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:10 crc kubenswrapper[4752]: I0122 11:13:10.788903 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:12 crc kubenswrapper[4752]: I0122 11:13:12.070585 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4jmv"] Jan 22 11:13:12 crc kubenswrapper[4752]: I0122 11:13:12.697345 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4jmv" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="registry-server" containerID="cri-o://7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4" gracePeriod=2 Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.152840 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.235478 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggfsh\" (UniqueName: \"kubernetes.io/projected/d1764a3b-afde-4cd6-9fec-5127bf9d410c-kube-api-access-ggfsh\") pod \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.235577 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-catalog-content\") pod \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.235724 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-utilities\") pod \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\" (UID: \"d1764a3b-afde-4cd6-9fec-5127bf9d410c\") " Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.238335 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-utilities" (OuterVolumeSpecName: "utilities") pod "d1764a3b-afde-4cd6-9fec-5127bf9d410c" (UID: "d1764a3b-afde-4cd6-9fec-5127bf9d410c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.244960 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1764a3b-afde-4cd6-9fec-5127bf9d410c-kube-api-access-ggfsh" (OuterVolumeSpecName: "kube-api-access-ggfsh") pod "d1764a3b-afde-4cd6-9fec-5127bf9d410c" (UID: "d1764a3b-afde-4cd6-9fec-5127bf9d410c"). InnerVolumeSpecName "kube-api-access-ggfsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.295539 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1764a3b-afde-4cd6-9fec-5127bf9d410c" (UID: "d1764a3b-afde-4cd6-9fec-5127bf9d410c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.338689 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.338720 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggfsh\" (UniqueName: \"kubernetes.io/projected/d1764a3b-afde-4cd6-9fec-5127bf9d410c-kube-api-access-ggfsh\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.338758 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1764a3b-afde-4cd6-9fec-5127bf9d410c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.716374 4752 generic.go:334] "Generic (PLEG): container finished" podID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerID="7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4" exitCode=0 Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.716418 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerDied","Data":"7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4"} Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.716446 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4jmv" event={"ID":"d1764a3b-afde-4cd6-9fec-5127bf9d410c","Type":"ContainerDied","Data":"678da1d736790d8b8193d9de905e1f86c946f15b52af4c6606a490bfd8500e65"} Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.716469 4752 scope.go:117] "RemoveContainer" containerID="7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.716486 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4jmv" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.751702 4752 scope.go:117] "RemoveContainer" containerID="9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.787831 4752 scope.go:117] "RemoveContainer" containerID="7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.794667 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4jmv"] Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.805580 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4jmv"] Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.833143 4752 scope.go:117] "RemoveContainer" containerID="7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.833167 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.833204 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:13 crc kubenswrapper[4752]: E0122 11:13:13.833784 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4\": container with ID starting with 7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4 not found: ID does not exist" containerID="7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.833828 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4"} err="failed to get container status \"7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4\": rpc error: code = NotFound desc = could not find container \"7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4\": container with ID starting with 7b9185c492cfc830fbce2f77be2118d6da25ac3c6bf6518587d1020c9f8638e4 not found: ID does not exist" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.833902 4752 scope.go:117] "RemoveContainer" containerID="9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51" Jan 22 11:13:13 crc kubenswrapper[4752]: E0122 11:13:13.834284 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51\": container with ID starting with 9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51 not found: ID does not exist" containerID="9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.834313 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51"} err="failed to get container status \"9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51\": rpc error: code = NotFound desc = could not find container \"9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51\": container with ID starting with 
9397954bf60fb3792ab7c62de49a50b36e16836a886a2c7f64d4d3be86f8bb51 not found: ID does not exist" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.834327 4752 scope.go:117] "RemoveContainer" containerID="7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f" Jan 22 11:13:13 crc kubenswrapper[4752]: E0122 11:13:13.834557 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f\": container with ID starting with 7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f not found: ID does not exist" containerID="7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.834598 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f"} err="failed to get container status \"7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f\": rpc error: code = NotFound desc = could not find container \"7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f\": container with ID starting with 7f3b050d53e76cadd13fc09f8542903942222722e07b594b7aa9d8bffc303e8f not found: ID does not exist" Jan 22 11:13:13 crc kubenswrapper[4752]: I0122 11:13:13.883846 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:14 crc kubenswrapper[4752]: I0122 11:13:14.820188 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:15 crc kubenswrapper[4752]: I0122 11:13:15.118680 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" path="/var/lib/kubelet/pods/d1764a3b-afde-4cd6-9fec-5127bf9d410c/volumes" Jan 22 11:13:16 crc kubenswrapper[4752]: I0122 11:13:16.281604 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfnc6"] Jan 22 11:13:16 crc kubenswrapper[4752]: I0122 11:13:16.753677 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bfnc6" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="registry-server" containerID="cri-o://ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b" gracePeriod=2 Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.210688 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.327765 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2zbq\" (UniqueName: \"kubernetes.io/projected/61b3f880-7027-47ee-8833-f11ee6e127f8-kube-api-access-v2zbq\") pod \"61b3f880-7027-47ee-8833-f11ee6e127f8\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.328104 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-utilities\") pod \"61b3f880-7027-47ee-8833-f11ee6e127f8\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.328154 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-catalog-content\") pod \"61b3f880-7027-47ee-8833-f11ee6e127f8\" (UID: \"61b3f880-7027-47ee-8833-f11ee6e127f8\") " Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.330597 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-utilities" (OuterVolumeSpecName: "utilities") pod "61b3f880-7027-47ee-8833-f11ee6e127f8" (UID: "61b3f880-7027-47ee-8833-f11ee6e127f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.338634 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b3f880-7027-47ee-8833-f11ee6e127f8-kube-api-access-v2zbq" (OuterVolumeSpecName: "kube-api-access-v2zbq") pod "61b3f880-7027-47ee-8833-f11ee6e127f8" (UID: "61b3f880-7027-47ee-8833-f11ee6e127f8"). InnerVolumeSpecName "kube-api-access-v2zbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.378751 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61b3f880-7027-47ee-8833-f11ee6e127f8" (UID: "61b3f880-7027-47ee-8833-f11ee6e127f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.431218 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.431260 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b3f880-7027-47ee-8833-f11ee6e127f8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.431276 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2zbq\" (UniqueName: \"kubernetes.io/projected/61b3f880-7027-47ee-8833-f11ee6e127f8-kube-api-access-v2zbq\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.774218 4752 generic.go:334] "Generic (PLEG): container finished" podID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerID="ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b" exitCode=0 Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.774266 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerDied","Data":"ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b"} Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.774318 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfnc6" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.774328 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfnc6" event={"ID":"61b3f880-7027-47ee-8833-f11ee6e127f8","Type":"ContainerDied","Data":"3050baf6ffd6d6706cc7da763acf0958431d476cc0ccac41c47854a189d2785f"} Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.774358 4752 scope.go:117] "RemoveContainer" containerID="ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.812534 4752 scope.go:117] "RemoveContainer" containerID="976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.818823 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfnc6"] Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.839571 4752 scope.go:117] "RemoveContainer" containerID="91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.846373 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfnc6"] Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.894494 4752 scope.go:117] "RemoveContainer" containerID="ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b" Jan 22 11:13:17 crc kubenswrapper[4752]: E0122 11:13:17.895206 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b\": container with ID starting with ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b not found: ID does not exist" containerID="ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.895262 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b"} err="failed to get container status \"ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b\": rpc error: code = NotFound desc = could not find container \"ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b\": container with ID starting with ac4e72c91682f3777a48369fcf1c2afc58b3f69f1e53ef0a035a4717f1bca13b not found: ID does not exist" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.895297 4752 scope.go:117] "RemoveContainer" containerID="976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d" Jan 22 11:13:17 crc kubenswrapper[4752]: E0122 11:13:17.895741 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d\": container with ID starting with 976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d not found: ID does not exist" containerID="976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.895778 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d"} err="failed to get container status \"976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d\": rpc error: code = NotFound desc = could not find container \"976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d\": container with ID starting with 976354ee7627f99ce74f40033f71f8131fdaf8f989a855eb94f571189e87b47d not found: ID does not exist" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.895805 4752 scope.go:117] "RemoveContainer" containerID="91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343" Jan 22 11:13:17 crc kubenswrapper[4752]: E0122 11:13:17.896600 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343\": container with ID starting with 91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343 not found: ID does not exist" containerID="91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343" Jan 22 11:13:17 crc kubenswrapper[4752]: I0122 11:13:17.896646 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343"} err="failed to get container status \"91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343\": rpc error: code = NotFound desc = could not find container \"91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343\": container with ID starting with 91ca9ab1d514010fd04ecfed62e95df8e06006a18997c16df149090ccdbc5343 not found: ID does not exist" Jan 22 11:13:19 crc kubenswrapper[4752]: I0122 11:13:19.113510 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" path="/var/lib/kubelet/pods/61b3f880-7027-47ee-8833-f11ee6e127f8/volumes" Jan 22 11:13:27 crc kubenswrapper[4752]: I0122 11:13:27.725466 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:13:27 crc kubenswrapper[4752]: I0122 11:13:27.726177 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:13:50 crc kubenswrapper[4752]: I0122 11:13:50.938395 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 11:13:50 crc kubenswrapper[4752]: I0122 11:13:50.939339 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="prometheus" containerID="cri-o://dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731" gracePeriod=600 Jan 22 11:13:50 crc kubenswrapper[4752]: I0122 11:13:50.939823 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="thanos-sidecar" containerID="cri-o://b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf" gracePeriod=600 Jan 22 11:13:50 crc kubenswrapper[4752]: I0122 11:13:50.939932 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="config-reloader" containerID="cri-o://280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1" gracePeriod=600 Jan 22 11:13:51 crc kubenswrapper[4752]: I0122 11:13:51.146688 4752 generic.go:334] "Generic (PLEG): container finished" podID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerID="b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf" exitCode=0 Jan 22 11:13:51 crc kubenswrapper[4752]: I0122 11:13:51.146714 4752 generic.go:334] "Generic (PLEG): container finished" podID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerID="dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731" exitCode=0 Jan 22 11:13:51 crc kubenswrapper[4752]: I0122 11:13:51.146734 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerDied","Data":"b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf"} Jan 22 11:13:51 crc kubenswrapper[4752]: I0122 11:13:51.146757 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerDied","Data":"dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731"} Jan 22 11:13:51 crc kubenswrapper[4752]: I0122 11:13:51.957826 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.134925 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-2\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.135198 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b945r\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-kube-api-access-b945r\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.135221 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-secret-combined-ca-bundle\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.135261 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-config\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.135285 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-0\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.135334 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136080 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136151 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136198 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-tls-assets\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136234 4752 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136254 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7558d250-f7b6-49f0-90a1-b524e8b0d376-config-out\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136348 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136453 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-thanos-prometheus-http-client-file\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136516 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-1\") pod \"7558d250-f7b6-49f0-90a1-b524e8b0d376\" (UID: \"7558d250-f7b6-49f0-90a1-b524e8b0d376\") " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.136896 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.137347 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.137591 4752 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.137656 4752 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.137673 4752 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7558d250-f7b6-49f0-90a1-b524e8b0d376-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.142078 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.142894 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.143492 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7558d250-f7b6-49f0-90a1-b524e8b0d376-config-out" (OuterVolumeSpecName: "config-out") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.143942 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.145781 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.148481 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.149063 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-kube-api-access-b945r" (OuterVolumeSpecName: "kube-api-access-b945r") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "kube-api-access-b945r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.149272 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-config" (OuterVolumeSpecName: "config") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.162916 4752 generic.go:334] "Generic (PLEG): container finished" podID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerID="280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1" exitCode=0 Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.163035 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerDied","Data":"280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1"} Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.163092 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7558d250-f7b6-49f0-90a1-b524e8b0d376","Type":"ContainerDied","Data":"92a305d2cb677965360eb6c928de812758c4132c6b406fbf0b837b9601be4214"} Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.163086 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.163120 4752 scope.go:117] "RemoveContainer" containerID="b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.189808 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240367 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b945r\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-kube-api-access-b945r\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240407 4752 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240423 4752 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240439 4752 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240509 4752 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") on node \"crc\" " Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240527 4752 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7558d250-f7b6-49f0-90a1-b524e8b0d376-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240542 4752 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7558d250-f7b6-49f0-90a1-b524e8b0d376-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240556 4752 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.240573 4752 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.248957 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config" (OuterVolumeSpecName: "web-config") pod "7558d250-f7b6-49f0-90a1-b524e8b0d376" (UID: "7558d250-f7b6-49f0-90a1-b524e8b0d376"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.284071 4752 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.284269 4752 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d") on node "crc" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.325168 4752 scope.go:117] "RemoveContainer" containerID="280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.342767 4752 reconciler_common.go:293] "Volume detached for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.342810 4752 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7558d250-f7b6-49f0-90a1-b524e8b0d376-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.351656 4752 scope.go:117] "RemoveContainer" containerID="dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.373723 4752 scope.go:117] "RemoveContainer" containerID="49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.407668 4752 scope.go:117] "RemoveContainer" containerID="b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.408415 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf\": container with ID starting with b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf not found: ID does not exist" containerID="b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.408449 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf"} err="failed to get container status \"b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf\": rpc error: code = NotFound desc = could not find container \"b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf\": container with ID starting with b335647437b507a0cd725ea0b3d0c9f4cb85f06c07d4624bae0c7576adfee0cf not found: ID does not exist" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.408470 4752 scope.go:117] "RemoveContainer" containerID="280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.408882 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1\": container with ID starting with 280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1 not found: ID does not exist" containerID="280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.408920 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1"} err="failed to get container status 
\"280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1\": rpc error: code = NotFound desc = could not find container \"280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1\": container with ID starting with 280b41b08ad50fcf00b2a110365a3b6b55f1f50a7b6767e63264233cb177efa1 not found: ID does not exist" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.408947 4752 scope.go:117] "RemoveContainer" containerID="dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.409311 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731\": container with ID starting with dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731 not found: ID does not exist" containerID="dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.409336 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731"} err="failed to get container status \"dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731\": rpc error: code = NotFound desc = could not find container \"dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731\": container with ID starting with dd22454acf1ffdee5d505676f7bb5ead04cdd477a4699a5c67f10f46e5dc3731 not found: ID does not exist" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.409350 4752 scope.go:117] "RemoveContainer" containerID="49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.409574 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7\": container with ID starting with 49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7 not found: ID does not exist" containerID="49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.409593 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7"} err="failed to get container status \"49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7\": rpc error: code = NotFound desc = could not find container \"49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7\": container with ID starting with 49f3721381dd33065b70664b5f481a351b1348b0cf38cf24d8a8edf0510036d7 not found: ID does not exist" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.504992 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.518599 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.552843 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553233 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="prometheus" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553249 4752 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="prometheus" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553278 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="extract-content" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553285 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="extract-content" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553295 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="extract-utilities" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553302 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="extract-utilities" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553317 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="registry-server" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553323 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="registry-server" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553333 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="extract-content" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553338 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="extract-content" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553350 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="registry-server" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553355 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="registry-server" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553366 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="thanos-sidecar" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553371 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="thanos-sidecar" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553384 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="extract-utilities" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553390 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="extract-utilities" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553399 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="init-config-reloader" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553405 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="init-config-reloader" Jan 22 11:13:52 crc kubenswrapper[4752]: E0122 11:13:52.553418 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="config-reloader" Jan 22 11:13:52 crc 
kubenswrapper[4752]: I0122 11:13:52.553424 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="config-reloader" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553579 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1764a3b-afde-4cd6-9fec-5127bf9d410c" containerName="registry-server" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553597 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="thanos-sidecar" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553609 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="prometheus" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553616 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b3f880-7027-47ee-8833-f11ee6e127f8" containerName="registry-server" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.553627 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" containerName="config-reloader" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.555436 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.558016 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-n6xwj" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.558310 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.558524 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.558698 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.560019 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.560203 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.561313 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.569158 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.583469 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650319 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650373 4752 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650406 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650440 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650471 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650633 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650695 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650724 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650875 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-config\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650901 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.650932 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmc8x\" (UniqueName: \"kubernetes.io/projected/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-kube-api-access-pmc8x\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.651156 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.651328 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.753050 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.753545 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.753799 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.753849 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.753918 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.753961 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.754001 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.754036 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.754107 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.754951 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-config\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.754992 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmc8x\" (UniqueName: \"kubernetes.io/projected/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-kube-api-access-pmc8x\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.755046 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.755115 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.755543 
4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.755999 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.756260 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.758606 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.760692 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.761822 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.762335 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-config\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.761484 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.765311 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.768074 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.768981 4752 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.769014 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d012c5afc253dfb5bb1585a9f32cbc4589affd7948918f5a8ea0a0a38ad6626e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.774312 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.780504 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmc8x\" (UniqueName: \"kubernetes.io/projected/d6f7044f-3712-41ce-88d1-25e80ba5a0bd-kube-api-access-pmc8x\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.812826 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7cc3c0c-bce3-4a44-b8b4-d93236cf0d9d\") pod \"prometheus-metric-storage-0\" (UID: \"d6f7044f-3712-41ce-88d1-25e80ba5a0bd\") " pod="openstack/prometheus-metric-storage-0" Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.880018 4752 util.go:30] "No sandbox for pod can be found. 
Jan 22 11:13:52 crc kubenswrapper[4752]: I0122 11:13:52.880018 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 22 11:13:53 crc kubenswrapper[4752]: I0122 11:13:53.117369 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7558d250-f7b6-49f0-90a1-b524e8b0d376" path="/var/lib/kubelet/pods/7558d250-f7b6-49f0-90a1-b524e8b0d376/volumes"
Jan 22 11:13:53 crc kubenswrapper[4752]: I0122 11:13:53.386285 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 22 11:13:54 crc kubenswrapper[4752]: I0122 11:13:54.191713 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d6f7044f-3712-41ce-88d1-25e80ba5a0bd","Type":"ContainerStarted","Data":"f61f6dadd08d23d216cefd2fd3175afaad7e40940cedb6372510a0649eebab1f"}
Jan 22 11:13:57 crc kubenswrapper[4752]: I0122 11:13:57.723560 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:13:57 crc kubenswrapper[4752]: I0122 11:13:57.724457 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:13:57 crc kubenswrapper[4752]: I0122 11:13:57.724538 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 11:13:57 crc kubenswrapper[4752]: I0122 11:13:57.726011 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3a05878df63bd7422e6a5f85a14142398ffaecda9556b5a85b69b9b9c85dc24"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:13:57 crc kubenswrapper[4752]: I0122 11:13:57.726163 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://a3a05878df63bd7422e6a5f85a14142398ffaecda9556b5a85b69b9b9c85dc24" gracePeriod=600
Jan 22 11:13:58 crc kubenswrapper[4752]: I0122 11:13:58.244727 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d6f7044f-3712-41ce-88d1-25e80ba5a0bd","Type":"ContainerStarted","Data":"49b116f000ec8507028ee41ecca336b7f5c0f3fecae8621ce172dc575a5e9e44"}
Jan 22 11:13:58 crc kubenswrapper[4752]: I0122 11:13:58.248584 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="a3a05878df63bd7422e6a5f85a14142398ffaecda9556b5a85b69b9b9c85dc24" exitCode=0
Jan 22 11:13:58 crc kubenswrapper[4752]: I0122 11:13:58.248623 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"a3a05878df63bd7422e6a5f85a14142398ffaecda9556b5a85b69b9b9c85dc24"}
Jan 22 11:13:58 crc kubenswrapper[4752]: I0122 11:13:58.248646 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b"}
Jan 22 11:13:58 crc kubenswrapper[4752]: I0122 11:13:58.248665 4752 scope.go:117] "RemoveContainer" containerID="d8d4a0367b441aa318ea13a556d1d9c716143e5abe9c51339a1b6127faa71c61"
Jan 22 11:14:07 crc kubenswrapper[4752]: I0122 11:14:07.362457 4752 generic.go:334] "Generic (PLEG): container finished" podID="d6f7044f-3712-41ce-88d1-25e80ba5a0bd" containerID="49b116f000ec8507028ee41ecca336b7f5c0f3fecae8621ce172dc575a5e9e44" exitCode=0
Jan 22 11:14:07 crc kubenswrapper[4752]: I0122 11:14:07.362544 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d6f7044f-3712-41ce-88d1-25e80ba5a0bd","Type":"ContainerDied","Data":"49b116f000ec8507028ee41ecca336b7f5c0f3fecae8621ce172dc575a5e9e44"}
Jan 22 11:14:08 crc kubenswrapper[4752]: I0122 11:14:08.379535 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d6f7044f-3712-41ce-88d1-25e80ba5a0bd","Type":"ContainerStarted","Data":"2b34ccb4df7d873e45d2c3eca97485edd776675853c7a8b2fe6e0db9549712e8"}
Jan 22 11:14:12 crc kubenswrapper[4752]: I0122 11:14:12.426433 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d6f7044f-3712-41ce-88d1-25e80ba5a0bd","Type":"ContainerStarted","Data":"968c5ba2d8336810d4dfd9663a1244178de6b969ac3c74cfade38b1145a6dd7f"}
Jan 22 11:14:12 crc kubenswrapper[4752]: I0122 11:14:12.426837 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d6f7044f-3712-41ce-88d1-25e80ba5a0bd","Type":"ContainerStarted","Data":"6300a8a99fd6d8b014792ae35ed23f53019f6b638ef30684b9281d3130d2fbf1"}
Jan 22 11:14:12 crc kubenswrapper[4752]: I0122 11:14:12.475534 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.475513457 podStartE2EDuration="20.475513457s" podCreationTimestamp="2026-01-22 11:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:14:12.467147269 +0000 UTC m=+2931.697090207" watchObservedRunningTime="2026-01-22 11:14:12.475513457 +0000 UTC m=+2931.705456385"
Jan 22 11:14:12 crc kubenswrapper[4752]: I0122 11:14:12.880590 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 22 11:14:22 crc kubenswrapper[4752]: I0122 11:14:22.881213 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 22 11:14:22 crc kubenswrapper[4752]: I0122 11:14:22.887646 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 22 11:14:23 crc kubenswrapper[4752]: I0122 11:14:23.543611 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.522934 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.524648 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.528145 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wf56n"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.528186 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.528541 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.529639 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.538888 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617410 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81cf3443-216b-414b-830b-2747c7ddcd78-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617489 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf3443-216b-414b-830b-2747c7ddcd78-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617535 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617612 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617688 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest"
Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617731 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81cf3443-216b-414b-830b-2747c7ddcd78-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest"
\"kubernetes.io/configmap/81cf3443-216b-414b-830b-2747c7ddcd78-config-data\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.617930 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchcp\" (UniqueName: \"kubernetes.io/projected/81cf3443-216b-414b-830b-2747c7ddcd78-kube-api-access-vchcp\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.618073 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720028 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81cf3443-216b-414b-830b-2747c7ddcd78-config-data\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720082 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vchcp\" (UniqueName: \"kubernetes.io/projected/81cf3443-216b-414b-830b-2747c7ddcd78-kube-api-access-vchcp\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720133 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720174 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81cf3443-216b-414b-830b-2747c7ddcd78-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720193 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf3443-216b-414b-830b-2747c7ddcd78-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720215 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720252 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720282 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.720312 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81cf3443-216b-414b-830b-2747c7ddcd78-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.721025 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81cf3443-216b-414b-830b-2747c7ddcd78-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.721525 4752 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.721579 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81cf3443-216b-414b-830b-2747c7ddcd78-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.721834 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81cf3443-216b-414b-830b-2747c7ddcd78-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.721978 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81cf3443-216b-414b-830b-2747c7ddcd78-config-data\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.727120 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.739768 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.746173 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81cf3443-216b-414b-830b-2747c7ddcd78-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.753825 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vchcp\" (UniqueName: \"kubernetes.io/projected/81cf3443-216b-414b-830b-2747c7ddcd78-kube-api-access-vchcp\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.808182 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"81cf3443-216b-414b-830b-2747c7ddcd78\") " pod="openstack/tempest-tests-tempest" Jan 22 11:14:46 crc kubenswrapper[4752]: I0122 11:14:46.862135 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 22 11:14:47 crc kubenswrapper[4752]: I0122 11:14:47.402914 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 22 11:14:47 crc kubenswrapper[4752]: I0122 11:14:47.412099 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:14:47 crc kubenswrapper[4752]: I0122 11:14:47.794798 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"81cf3443-216b-414b-830b-2747c7ddcd78","Type":"ContainerStarted","Data":"49f4fdd21c1ea51253120dacef1d967a6f43f8d8ddfcbfcf10a2dae3e7b46005"} Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.144321 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"] Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.146710 4752 util.go:30] "No sandbox for pod can be found. 
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.146710 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.149050 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.150186 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.155076 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"]
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.243320 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk72c\" (UniqueName: \"kubernetes.io/projected/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-kube-api-access-tk72c\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.243404 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-config-volume\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.243500 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-secret-volume\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.346024 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk72c\" (UniqueName: \"kubernetes.io/projected/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-kube-api-access-tk72c\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.346362 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-config-volume\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.346510 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-secret-volume\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.347187 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-config-volume\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.371548 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk72c\" (UniqueName: \"kubernetes.io/projected/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-kube-api-access-tk72c\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.383908 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-secret-volume\") pod \"collect-profiles-29484675-mmbvs\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.476256 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.963700 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"81cf3443-216b-414b-830b-2747c7ddcd78","Type":"ContainerStarted","Data":"a22feb5a4f46a52968dcbda5589203f521725930f89046b2e9121727f2cb4930"}
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.993747 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"]
Jan 22 11:15:00 crc kubenswrapper[4752]: I0122 11:15:00.994324 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.129624337 podStartE2EDuration="15.994315645s" podCreationTimestamp="2026-01-22 11:14:45 +0000 UTC" firstStartedPulling="2026-01-22 11:14:47.410015118 +0000 UTC m=+2966.639958036" lastFinishedPulling="2026-01-22 11:14:59.274706426 +0000 UTC m=+2978.504649344" observedRunningTime="2026-01-22 11:15:00.988416202 +0000 UTC m=+2980.218359120" watchObservedRunningTime="2026-01-22 11:15:00.994315645 +0000 UTC m=+2980.224258543"
Jan 22 11:15:01 crc kubenswrapper[4752]: W0122 11:15:01.014121 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d32bf1f_86e9_4c48_8dbd_54ee831d235c.slice/crio-2df20680c5b36e015a2d06903ba242514d02e11fdcb9dc3b90594b64da07e7f9 WatchSource:0}: Error finding container 2df20680c5b36e015a2d06903ba242514d02e11fdcb9dc3b90594b64da07e7f9: Status 404 returned error can't find the container with id 2df20680c5b36e015a2d06903ba242514d02e11fdcb9dc3b90594b64da07e7f9
Jan 22 11:15:01 crc kubenswrapper[4752]: I0122 11:15:01.974348 4752 generic.go:334] "Generic (PLEG): container finished" podID="1d32bf1f-86e9-4c48-8dbd-54ee831d235c" containerID="96f2a1aa59972e4fb850775573b797df2c4d7369a0e1bed1b5506d5d3adbfdc2" exitCode=0
Jan 22 11:15:01 crc kubenswrapper[4752]: I0122 11:15:01.974569 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs" event={"ID":"1d32bf1f-86e9-4c48-8dbd-54ee831d235c","Type":"ContainerDied","Data":"96f2a1aa59972e4fb850775573b797df2c4d7369a0e1bed1b5506d5d3adbfdc2"}
Jan 22 11:15:01 crc kubenswrapper[4752]: I0122 11:15:01.975001 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs" event={"ID":"1d32bf1f-86e9-4c48-8dbd-54ee831d235c","Type":"ContainerStarted","Data":"2df20680c5b36e015a2d06903ba242514d02e11fdcb9dc3b90594b64da07e7f9"}
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.350917 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.412127 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk72c\" (UniqueName: \"kubernetes.io/projected/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-kube-api-access-tk72c\") pod \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") "
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.412398 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-secret-volume\") pod \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") "
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.412435 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-config-volume\") pod \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\" (UID: \"1d32bf1f-86e9-4c48-8dbd-54ee831d235c\") "
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.413415 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-config-volume" (OuterVolumeSpecName: "config-volume") pod "1d32bf1f-86e9-4c48-8dbd-54ee831d235c" (UID: "1d32bf1f-86e9-4c48-8dbd-54ee831d235c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.421546 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-kube-api-access-tk72c" (OuterVolumeSpecName: "kube-api-access-tk72c") pod "1d32bf1f-86e9-4c48-8dbd-54ee831d235c" (UID: "1d32bf1f-86e9-4c48-8dbd-54ee831d235c"). InnerVolumeSpecName "kube-api-access-tk72c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.424193 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1d32bf1f-86e9-4c48-8dbd-54ee831d235c" (UID: "1d32bf1f-86e9-4c48-8dbd-54ee831d235c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.515161 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.515209 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:15:03 crc kubenswrapper[4752]: I0122 11:15:03.515224 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk72c\" (UniqueName: \"kubernetes.io/projected/1d32bf1f-86e9-4c48-8dbd-54ee831d235c-kube-api-access-tk72c\") on node \"crc\" DevicePath \"\""
Jan 22 11:15:04 crc kubenswrapper[4752]: I0122 11:15:04.006020 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs" event={"ID":"1d32bf1f-86e9-4c48-8dbd-54ee831d235c","Type":"ContainerDied","Data":"2df20680c5b36e015a2d06903ba242514d02e11fdcb9dc3b90594b64da07e7f9"}
Jan 22 11:15:04 crc kubenswrapper[4752]: I0122 11:15:04.006899 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df20680c5b36e015a2d06903ba242514d02e11fdcb9dc3b90594b64da07e7f9"
Jan 22 11:15:04 crc kubenswrapper[4752]: I0122 11:15:04.006077 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"
Jan 22 11:15:04 crc kubenswrapper[4752]: I0122 11:15:04.459925 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk"]
Jan 22 11:15:04 crc kubenswrapper[4752]: I0122 11:15:04.471752 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484630-v8tfk"]
Jan 22 11:15:05 crc kubenswrapper[4752]: I0122 11:15:05.115213 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc1f620-50fd-495f-b9e0-7e676820eece" path="/var/lib/kubelet/pods/6cc1f620-50fd-495f-b9e0-7e676820eece/volumes"
Jan 22 11:15:46 crc kubenswrapper[4752]: I0122 11:15:46.886977 4752 scope.go:117] "RemoveContainer" containerID="fdf7aa8f30242e2ab0b897febf79c68e7d84447c50caa63138a01f75ba6f3a6d"
Jan 22 11:16:27 crc kubenswrapper[4752]: I0122 11:16:27.723398 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:16:27 crc kubenswrapper[4752]: I0122 11:16:27.724007 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:16:57 crc kubenswrapper[4752]: I0122 11:16:57.724239 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:16:57 crc kubenswrapper[4752]: I0122 11:16:57.724844 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:17:27 crc kubenswrapper[4752]: I0122 11:17:27.724086 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:17:27 crc kubenswrapper[4752]: I0122 11:17:27.724712 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:17:27 crc kubenswrapper[4752]: I0122 11:17:27.724756 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 11:17:27 crc kubenswrapper[4752]: I0122 11:17:27.725540 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:17:27 crc kubenswrapper[4752]: I0122 11:17:27.725594 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" gracePeriod=600
Jan 22 11:17:28 crc kubenswrapper[4752]: I0122 11:17:28.520527 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" exitCode=0
Jan 22 11:17:28 crc kubenswrapper[4752]: I0122 11:17:28.520642 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b"}
Jan 22 11:17:28 crc kubenswrapper[4752]: I0122 11:17:28.520941 4752 scope.go:117] "RemoveContainer" containerID="a3a05878df63bd7422e6a5f85a14142398ffaecda9556b5a85b69b9b9c85dc24"
Jan 22 11:17:28 crc kubenswrapper[4752]: E0122 11:17:28.869521 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:17:29 crc kubenswrapper[4752]: E0122 11:17:29.534334 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:17:40 crc kubenswrapper[4752]: I0122 11:17:40.098938 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:17:40 crc kubenswrapper[4752]: E0122 11:17:40.099810 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:17:52 crc kubenswrapper[4752]: I0122 11:17:52.097786 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:17:52 crc kubenswrapper[4752]: E0122 11:17:52.098577 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:18:07 crc kubenswrapper[4752]: I0122 11:18:07.098521 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:18:07 crc kubenswrapper[4752]: E0122 11:18:07.099589 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:18:21 crc kubenswrapper[4752]: I0122 11:18:21.106872 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:18:21 crc kubenswrapper[4752]: E0122 11:18:21.108921 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:18:35 crc kubenswrapper[4752]: I0122 11:18:35.097821 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:18:35 crc kubenswrapper[4752]: E0122 11:18:35.099010 4752 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:18:47 crc kubenswrapper[4752]: I0122 11:18:47.098798 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:18:47 crc kubenswrapper[4752]: E0122 11:18:47.099953 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:19:01 crc kubenswrapper[4752]: I0122 11:19:01.116199 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:19:01 crc kubenswrapper[4752]: E0122 11:19:01.118285 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:19:12 crc kubenswrapper[4752]: I0122 11:19:12.097966 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:19:12 crc kubenswrapper[4752]: E0122 11:19:12.098947 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:19:24 crc kubenswrapper[4752]: I0122 11:19:24.098188 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:19:24 crc kubenswrapper[4752]: E0122 11:19:24.099167 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.228071 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9tlf"] Jan 22 11:19:35 crc kubenswrapper[4752]: E0122 11:19:35.229636 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d32bf1f-86e9-4c48-8dbd-54ee831d235c" containerName="collect-profiles" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.229662 4752 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1d32bf1f-86e9-4c48-8dbd-54ee831d235c" containerName="collect-profiles" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.230037 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d32bf1f-86e9-4c48-8dbd-54ee831d235c" containerName="collect-profiles" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.232875 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.253904 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9tlf"] Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.357149 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-catalog-content\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.357434 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-utilities\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.357802 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxth7\" (UniqueName: \"kubernetes.io/projected/0a2d6e10-bf4a-44bc-a910-836caa618f5d-kube-api-access-gxth7\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.459724 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-catalog-content\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.459809 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-utilities\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.459904 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxth7\" (UniqueName: \"kubernetes.io/projected/0a2d6e10-bf4a-44bc-a910-836caa618f5d-kube-api-access-gxth7\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.460936 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-utilities\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.460935 4752 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-catalog-content\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.489105 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxth7\" (UniqueName: \"kubernetes.io/projected/0a2d6e10-bf4a-44bc-a910-836caa618f5d-kube-api-access-gxth7\") pod \"certified-operators-r9tlf\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.556838 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:35 crc kubenswrapper[4752]: I0122 11:19:35.971559 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9tlf"] Jan 22 11:19:35 crc kubenswrapper[4752]: W0122 11:19:35.979499 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2d6e10_bf4a_44bc_a910_836caa618f5d.slice/crio-a9a01e1018c38b5119090be0c0bb0d22da7bf2fca88e4b8d4ea8a230d7ceaa88 WatchSource:0}: Error finding container a9a01e1018c38b5119090be0c0bb0d22da7bf2fca88e4b8d4ea8a230d7ceaa88: Status 404 returned error can't find the container with id a9a01e1018c38b5119090be0c0bb0d22da7bf2fca88e4b8d4ea8a230d7ceaa88 Jan 22 11:19:36 crc kubenswrapper[4752]: I0122 11:19:36.993471 4752 generic.go:334] "Generic (PLEG): container finished" podID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerID="5173ebfc5be5a45639a15be5ac357a7462bc1505a031afaf42b2a34d015b771d" exitCode=0 Jan 22 11:19:36 crc kubenswrapper[4752]: I0122 11:19:36.993624 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerDied","Data":"5173ebfc5be5a45639a15be5ac357a7462bc1505a031afaf42b2a34d015b771d"} Jan 22 11:19:36 crc kubenswrapper[4752]: I0122 11:19:36.993874 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerStarted","Data":"a9a01e1018c38b5119090be0c0bb0d22da7bf2fca88e4b8d4ea8a230d7ceaa88"} Jan 22 11:19:38 crc kubenswrapper[4752]: I0122 11:19:38.099031 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:19:38 crc kubenswrapper[4752]: E0122 11:19:38.099987 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:19:39 crc kubenswrapper[4752]: I0122 11:19:39.017116 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerStarted","Data":"9cdb375367c5deb726ec84350cdc02d251cea33a8498a4aef1947c202e6b2016"} Jan 22 11:19:41 crc 
kubenswrapper[4752]: I0122 11:19:41.042222 4752 generic.go:334] "Generic (PLEG): container finished" podID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerID="9cdb375367c5deb726ec84350cdc02d251cea33a8498a4aef1947c202e6b2016" exitCode=0 Jan 22 11:19:41 crc kubenswrapper[4752]: I0122 11:19:41.042309 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerDied","Data":"9cdb375367c5deb726ec84350cdc02d251cea33a8498a4aef1947c202e6b2016"} Jan 22 11:19:43 crc kubenswrapper[4752]: I0122 11:19:43.066156 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerStarted","Data":"54744f8eb55937fc07bcb9739c777fdbec8e123643c80d829c3023c6d7c0be28"} Jan 22 11:19:43 crc kubenswrapper[4752]: I0122 11:19:43.091672 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9tlf" podStartSLOduration=2.983545163 podStartE2EDuration="8.091645397s" podCreationTimestamp="2026-01-22 11:19:35 +0000 UTC" firstStartedPulling="2026-01-22 11:19:36.997033807 +0000 UTC m=+3256.226976715" lastFinishedPulling="2026-01-22 11:19:42.105134041 +0000 UTC m=+3261.335076949" observedRunningTime="2026-01-22 11:19:43.083616922 +0000 UTC m=+3262.313559840" watchObservedRunningTime="2026-01-22 11:19:43.091645397 +0000 UTC m=+3262.321588335" Jan 22 11:19:45 crc kubenswrapper[4752]: I0122 11:19:45.557985 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:45 crc kubenswrapper[4752]: I0122 11:19:45.558607 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:45 crc kubenswrapper[4752]: I0122 11:19:45.615389 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:51 crc kubenswrapper[4752]: I0122 11:19:51.109026 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:19:51 crc kubenswrapper[4752]: E0122 11:19:51.109637 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:19:55 crc kubenswrapper[4752]: I0122 11:19:55.669502 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:55 crc kubenswrapper[4752]: I0122 11:19:55.724934 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9tlf"] Jan 22 11:19:56 crc kubenswrapper[4752]: I0122 11:19:56.207755 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9tlf" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="registry-server" containerID="cri-o://54744f8eb55937fc07bcb9739c777fdbec8e123643c80d829c3023c6d7c0be28" gracePeriod=2
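The "SyncLoop DELETE" above arrives from the API server, and the kubelet responds by killing registry-server with the short grace the pod carries (gracePeriod=2, presumably from the pod spec's terminationGracePeriodSeconds). For reference, a hedged client-go sketch of a deletion that requests that grace explicitly; the namespace and pod name are copied from the log, and in-cluster credentials are assumed:

```go
// Sketch: delete a pod with an explicit 2s grace period, mirroring the
// gracePeriod=2 kill the kubelet reports above. Not the actual deleter here,
// which is OLM's catalog-source rotation; just the equivalent API call.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	grace := int64(2) // mirrors gracePeriod=2 seen in the log
	err = client.CoreV1().Pods("openshift-marketplace").Delete(
		context.Background(),
		"certified-operators-r9tlf",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
	if err != nil {
		panic(err)
	}
}
```

After the kill, the teardown below follows the usual order: ContainerDied events, volume unmounts, then removal of the pod's containers and its orphaned volumes directory.
Jan 22 11:19:57 crc kubenswrapper[4752]: I0122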
11:19:57.230328 4752 generic.go:334] "Generic (PLEG): container finished" podID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerID="54744f8eb55937fc07bcb9739c777fdbec8e123643c80d829c3023c6d7c0be28" exitCode=0 Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.230511 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerDied","Data":"54744f8eb55937fc07bcb9739c777fdbec8e123643c80d829c3023c6d7c0be28"} Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.341171 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.446734 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-utilities\") pod \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.447455 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-utilities" (OuterVolumeSpecName: "utilities") pod "0a2d6e10-bf4a-44bc-a910-836caa618f5d" (UID: "0a2d6e10-bf4a-44bc-a910-836caa618f5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.447576 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-catalog-content\") pod \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.447789 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxth7\" (UniqueName: \"kubernetes.io/projected/0a2d6e10-bf4a-44bc-a910-836caa618f5d-kube-api-access-gxth7\") pod \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\" (UID: \"0a2d6e10-bf4a-44bc-a910-836caa618f5d\") " Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.448992 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.457078 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2d6e10-bf4a-44bc-a910-836caa618f5d-kube-api-access-gxth7" (OuterVolumeSpecName: "kube-api-access-gxth7") pod "0a2d6e10-bf4a-44bc-a910-836caa618f5d" (UID: "0a2d6e10-bf4a-44bc-a910-836caa618f5d"). InnerVolumeSpecName "kube-api-access-gxth7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.510219 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a2d6e10-bf4a-44bc-a910-836caa618f5d" (UID: "0a2d6e10-bf4a-44bc-a910-836caa618f5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.551394 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxth7\" (UniqueName: \"kubernetes.io/projected/0a2d6e10-bf4a-44bc-a910-836caa618f5d-kube-api-access-gxth7\") on node \"crc\" DevicePath \"\"" Jan 22 11:19:57 crc kubenswrapper[4752]: I0122 11:19:57.551424 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a2d6e10-bf4a-44bc-a910-836caa618f5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.243751 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9tlf" event={"ID":"0a2d6e10-bf4a-44bc-a910-836caa618f5d","Type":"ContainerDied","Data":"a9a01e1018c38b5119090be0c0bb0d22da7bf2fca88e4b8d4ea8a230d7ceaa88"} Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.243884 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9tlf" Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.244979 4752 scope.go:117] "RemoveContainer" containerID="54744f8eb55937fc07bcb9739c777fdbec8e123643c80d829c3023c6d7c0be28" Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.273541 4752 scope.go:117] "RemoveContainer" containerID="9cdb375367c5deb726ec84350cdc02d251cea33a8498a4aef1947c202e6b2016" Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.297974 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9tlf"] Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.306388 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9tlf"] Jan 22 11:19:58 crc kubenswrapper[4752]: I0122 11:19:58.320778 4752 scope.go:117] "RemoveContainer" containerID="5173ebfc5be5a45639a15be5ac357a7462bc1505a031afaf42b2a34d015b771d" Jan 22 11:19:59 crc kubenswrapper[4752]: I0122 11:19:59.111871 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" path="/var/lib/kubelet/pods/0a2d6e10-bf4a-44bc-a910-836caa618f5d/volumes" Jan 22 11:20:05 crc kubenswrapper[4752]: I0122 11:20:05.098469 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:20:05 crc kubenswrapper[4752]: E0122 11:20:05.099783 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:20:20 crc kubenswrapper[4752]: I0122 11:20:20.098825 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:20:20 crc kubenswrapper[4752]: E0122 11:20:20.100091 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:20:35 crc kubenswrapper[4752]: I0122 11:20:35.097387 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:20:35 crc kubenswrapper[4752]: E0122 11:20:35.098364 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:20:49 crc kubenswrapper[4752]: I0122 11:20:49.098408 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:20:49 crc kubenswrapper[4752]: E0122 11:20:49.099376 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:21:02 crc kubenswrapper[4752]: I0122 11:21:02.098607 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:21:02 crc kubenswrapper[4752]: E0122 11:21:02.099559 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:21:13 crc kubenswrapper[4752]: I0122 11:21:13.098428 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:21:13 crc kubenswrapper[4752]: E0122 11:21:13.099245 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:21:24 crc kubenswrapper[4752]: I0122 11:21:24.098358 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:21:24 crc kubenswrapper[4752]: E0122 11:21:24.100804 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:21:37 crc kubenswrapper[4752]: I0122 11:21:37.098323 4752 
scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:21:37 crc kubenswrapper[4752]: E0122 11:21:37.099767 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:21:49 crc kubenswrapper[4752]: I0122 11:21:49.098596 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:21:49 crc kubenswrapper[4752]: E0122 11:21:49.099656 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:22:03 crc kubenswrapper[4752]: I0122 11:22:03.099414 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:22:03 crc kubenswrapper[4752]: E0122 11:22:03.100214 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:22:17 crc kubenswrapper[4752]: I0122 11:22:17.097893 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:22:17 crc kubenswrapper[4752]: E0122 11:22:17.098604 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:22:32 crc kubenswrapper[4752]: I0122 11:22:32.097834 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b" Jan 22 11:22:32 crc kubenswrapper[4752]: I0122 11:22:32.873167 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"bcb09f4b5e636fffb88830672f7b4d5c7bc907b18986d8a908ea6b3641114c5c"} Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.068276 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zxqwz"] Jan 22 11:22:37 crc kubenswrapper[4752]: E0122 11:22:37.069358 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="extract-utilities" Jan 22 11:22:37 crc kubenswrapper[4752]: 
I0122 11:22:37.069377 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="extract-utilities" Jan 22 11:22:37 crc kubenswrapper[4752]: E0122 11:22:37.069424 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="extract-content" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.069432 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="extract-content" Jan 22 11:22:37 crc kubenswrapper[4752]: E0122 11:22:37.069448 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="registry-server" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.069455 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="registry-server" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.069688 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2d6e10-bf4a-44bc-a910-836caa618f5d" containerName="registry-server" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.071606 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.096632 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxqwz"] Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.211031 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsjpz\" (UniqueName: \"kubernetes.io/projected/e9e77e25-1d0a-4901-910c-06bb728b5179-kube-api-access-hsjpz\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.211081 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-catalog-content\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.211346 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-utilities\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.313345 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-utilities\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.313523 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsjpz\" (UniqueName: \"kubernetes.io/projected/e9e77e25-1d0a-4901-910c-06bb728b5179-kube-api-access-hsjpz\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc 
kubenswrapper[4752]: I0122 11:22:37.313546 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-catalog-content\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.314176 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-catalog-content\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.314175 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-utilities\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.336664 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsjpz\" (UniqueName: \"kubernetes.io/projected/e9e77e25-1d0a-4901-910c-06bb728b5179-kube-api-access-hsjpz\") pod \"redhat-operators-zxqwz\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.400725 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:37 crc kubenswrapper[4752]: I0122 11:22:37.954387 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxqwz"] Jan 22 11:22:38 crc kubenswrapper[4752]: I0122 11:22:38.936332 4752 generic.go:334] "Generic (PLEG): container finished" podID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerID="0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e" exitCode=0 Jan 22 11:22:38 crc kubenswrapper[4752]: I0122 11:22:38.936465 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerDied","Data":"0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e"} Jan 22 11:22:38 crc kubenswrapper[4752]: I0122 11:22:38.936617 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerStarted","Data":"4312b3a83babc27eeb2a1b08a978599677f5cf12f0811e2e1062d8a87ac16523"} Jan 22 11:22:38 crc kubenswrapper[4752]: I0122 11:22:38.942085 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:22:39 crc kubenswrapper[4752]: I0122 11:22:39.953930 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerStarted","Data":"e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d"} Jan 22 11:22:44 crc kubenswrapper[4752]: I0122 11:22:44.015120 4752 generic.go:334] "Generic (PLEG): container finished" podID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerID="e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d" exitCode=0 Jan 22 11:22:44 crc 
kubenswrapper[4752]: I0122 11:22:44.015239 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerDied","Data":"e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d"} Jan 22 11:22:45 crc kubenswrapper[4752]: I0122 11:22:45.030638 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerStarted","Data":"a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24"} Jan 22 11:22:45 crc kubenswrapper[4752]: I0122 11:22:45.070483 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zxqwz" podStartSLOduration=2.610047685 podStartE2EDuration="8.070466632s" podCreationTimestamp="2026-01-22 11:22:37 +0000 UTC" firstStartedPulling="2026-01-22 11:22:38.941621817 +0000 UTC m=+3438.171564745" lastFinishedPulling="2026-01-22 11:22:44.402040784 +0000 UTC m=+3443.631983692" observedRunningTime="2026-01-22 11:22:45.0551548 +0000 UTC m=+3444.285097718" watchObservedRunningTime="2026-01-22 11:22:45.070466632 +0000 UTC m=+3444.300409540" Jan 22 11:22:47 crc kubenswrapper[4752]: I0122 11:22:47.401339 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:47 crc kubenswrapper[4752]: I0122 11:22:47.403065 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:48 crc kubenswrapper[4752]: I0122 11:22:48.465191 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zxqwz" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="registry-server" probeResult="failure" output=< Jan 22 11:22:48 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 11:22:48 crc kubenswrapper[4752]: > Jan 22 11:22:57 crc kubenswrapper[4752]: I0122 11:22:57.454578 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:57 crc kubenswrapper[4752]: I0122 11:22:57.519788 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:57 crc kubenswrapper[4752]: I0122 11:22:57.696774 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxqwz"] Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.181495 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zxqwz" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="registry-server" containerID="cri-o://a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24" gracePeriod=2
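The single startup-probe failure above ('timeout: failed to connect service ":50051" within 1s') is the registry-server's gRPC endpoint not yet accepting connections; the message format matches the grpc_health_probe tool that catalog pods commonly use, and the probe flips to "started"/"ready" nine seconds later, so this was just a slow catalog load. A standalone reproduction of the same connectivity budget, handy when debugging from the node; the address and 1s timeout are taken from the probe output, not from the pod spec:

```go
// Sketch: TCP dial against the registry-server port with the probe's 1s
// budget. A refused or timed-out dial corresponds to the failure above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
	if err != nil {
		fmt.Println("probe would fail:", err)
		return
	}
	conn.Close()
	fmt.Println("probe would pass: port is accepting connections")
}
```
Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.687023 4752 util.go:48] "No ready sandbox for pod can be found.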
Need to start a new one" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.849888 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-catalog-content\") pod \"e9e77e25-1d0a-4901-910c-06bb728b5179\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.850128 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsjpz\" (UniqueName: \"kubernetes.io/projected/e9e77e25-1d0a-4901-910c-06bb728b5179-kube-api-access-hsjpz\") pod \"e9e77e25-1d0a-4901-910c-06bb728b5179\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.850259 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-utilities\") pod \"e9e77e25-1d0a-4901-910c-06bb728b5179\" (UID: \"e9e77e25-1d0a-4901-910c-06bb728b5179\") " Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.851618 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-utilities" (OuterVolumeSpecName: "utilities") pod "e9e77e25-1d0a-4901-910c-06bb728b5179" (UID: "e9e77e25-1d0a-4901-910c-06bb728b5179"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.859103 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e77e25-1d0a-4901-910c-06bb728b5179-kube-api-access-hsjpz" (OuterVolumeSpecName: "kube-api-access-hsjpz") pod "e9e77e25-1d0a-4901-910c-06bb728b5179" (UID: "e9e77e25-1d0a-4901-910c-06bb728b5179"). InnerVolumeSpecName "kube-api-access-hsjpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.953413 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsjpz\" (UniqueName: \"kubernetes.io/projected/e9e77e25-1d0a-4901-910c-06bb728b5179-kube-api-access-hsjpz\") on node \"crc\" DevicePath \"\"" Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.953449 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:22:59 crc kubenswrapper[4752]: I0122 11:22:59.990877 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9e77e25-1d0a-4901-910c-06bb728b5179" (UID: "e9e77e25-1d0a-4901-910c-06bb728b5179"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.056814 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e77e25-1d0a-4901-910c-06bb728b5179-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.194433 4752 generic.go:334] "Generic (PLEG): container finished" podID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerID="a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24" exitCode=0 Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.194487 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerDied","Data":"a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24"} Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.194501 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxqwz" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.194543 4752 scope.go:117] "RemoveContainer" containerID="a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.194531 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxqwz" event={"ID":"e9e77e25-1d0a-4901-910c-06bb728b5179","Type":"ContainerDied","Data":"4312b3a83babc27eeb2a1b08a978599677f5cf12f0811e2e1062d8a87ac16523"} Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.225179 4752 scope.go:117] "RemoveContainer" containerID="e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.236983 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxqwz"] Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.252619 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zxqwz"] Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.269150 4752 scope.go:117] "RemoveContainer" containerID="0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.300283 4752 scope.go:117] "RemoveContainer" containerID="a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24" Jan 22 11:23:00 crc kubenswrapper[4752]: E0122 11:23:00.300665 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24\": container with ID starting with a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24 not found: ID does not exist" containerID="a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.300705 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24"} err="failed to get container status \"a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24\": rpc error: code = NotFound desc = could not find container \"a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24\": container with ID starting with a7673e52c66f42d542b3556c0fc2e39b903ed61f4ed771206b31173051f05b24 not found: ID does not exist" Jan 22 11:23:00 crc 
kubenswrapper[4752]: I0122 11:23:00.300736 4752 scope.go:117] "RemoveContainer" containerID="e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d" Jan 22 11:23:00 crc kubenswrapper[4752]: E0122 11:23:00.301131 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d\": container with ID starting with e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d not found: ID does not exist" containerID="e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.301162 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d"} err="failed to get container status \"e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d\": rpc error: code = NotFound desc = could not find container \"e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d\": container with ID starting with e657075f8dcb04e136686e7f042e6bbaa797193c4bac8657aa350cb5c497531d not found: ID does not exist" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.301178 4752 scope.go:117] "RemoveContainer" containerID="0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e" Jan 22 11:23:00 crc kubenswrapper[4752]: E0122 11:23:00.301468 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e\": container with ID starting with 0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e not found: ID does not exist" containerID="0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e" Jan 22 11:23:00 crc kubenswrapper[4752]: I0122 11:23:00.301498 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e"} err="failed to get container status \"0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e\": rpc error: code = NotFound desc = could not find container \"0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e\": container with ID starting with 0b683883999d607c55454d53ec72c3a6d3284e28dbf2b6753e066f176875233e not found: ID does not exist" Jan 22 11:23:01 crc kubenswrapper[4752]: I0122 11:23:01.114826 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" path="/var/lib/kubelet/pods/e9e77e25-1d0a-4901-910c-06bb728b5179/volumes" Jan 22 11:23:20 crc kubenswrapper[4752]: I0122 11:23:20.982904 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qhl8s"] Jan 22 11:23:20 crc kubenswrapper[4752]: E0122 11:23:20.984829 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="extract-content" Jan 22 11:23:20 crc kubenswrapper[4752]: I0122 11:23:20.984967 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="extract-content" Jan 22 11:23:20 crc kubenswrapper[4752]: E0122 11:23:20.985071 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="extract-utilities" Jan 22 11:23:20 crc kubenswrapper[4752]: I0122 11:23:20.985133 4752 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="extract-utilities" Jan 22 11:23:20 crc kubenswrapper[4752]: E0122 11:23:20.985198 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="registry-server" Jan 22 11:23:20 crc kubenswrapper[4752]: I0122 11:23:20.985271 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="registry-server" Jan 22 11:23:20 crc kubenswrapper[4752]: I0122 11:23:20.985518 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e77e25-1d0a-4901-910c-06bb728b5179" containerName="registry-server" Jan 22 11:23:20 crc kubenswrapper[4752]: I0122 11:23:20.986977 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.006113 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qhl8s"] Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.146539 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-catalog-content\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.146669 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-utilities\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.146775 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcx9j\" (UniqueName: \"kubernetes.io/projected/5ef79f13-df89-4269-8022-d0582474aec2-kube-api-access-qcx9j\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.248262 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-utilities\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.248512 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcx9j\" (UniqueName: \"kubernetes.io/projected/5ef79f13-df89-4269-8022-d0582474aec2-kube-api-access-qcx9j\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.248542 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-catalog-content\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s" Jan 22 11:23:21 crc 
kubenswrapper[4752]: I0122 11:23:21.249284 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-catalog-content\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.250594 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-utilities\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.278882 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcx9j\" (UniqueName: \"kubernetes.io/projected/5ef79f13-df89-4269-8022-d0582474aec2-kube-api-access-qcx9j\") pod \"community-operators-qhl8s\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") " pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.310515 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:21 crc kubenswrapper[4752]: I0122 11:23:21.891565 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qhl8s"]
Jan 22 11:23:21 crc kubenswrapper[4752]: W0122 11:23:21.901365 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef79f13_df89_4269_8022_d0582474aec2.slice/crio-caca16368800f49d0ddbc58856f5967451cf01318a4b4061d10d7f06836ab7ed WatchSource:0}: Error finding container caca16368800f49d0ddbc58856f5967451cf01318a4b4061d10d7f06836ab7ed: Status 404 returned error can't find the container with id caca16368800f49d0ddbc58856f5967451cf01318a4b4061d10d7f06836ab7ed
Jan 22 11:23:22 crc kubenswrapper[4752]: I0122 11:23:22.435018 4752 generic.go:334] "Generic (PLEG): container finished" podID="5ef79f13-df89-4269-8022-d0582474aec2" containerID="50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733" exitCode=0
Jan 22 11:23:22 crc kubenswrapper[4752]: I0122 11:23:22.435092 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerDied","Data":"50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733"}
Jan 22 11:23:22 crc kubenswrapper[4752]: I0122 11:23:22.435525 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerStarted","Data":"caca16368800f49d0ddbc58856f5967451cf01318a4b4061d10d7f06836ab7ed"}
Jan 22 11:23:23 crc kubenswrapper[4752]: I0122 11:23:23.447140 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerStarted","Data":"a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e"}
Jan 22 11:23:24 crc kubenswrapper[4752]: I0122 11:23:24.469174 4752 generic.go:334] "Generic (PLEG): container finished" podID="5ef79f13-df89-4269-8022-d0582474aec2" containerID="a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e" exitCode=0
Jan 22 11:23:24 crc kubenswrapper[4752]: I0122 11:23:24.469520 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerDied","Data":"a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e"}
Jan 22 11:23:25 crc kubenswrapper[4752]: I0122 11:23:25.478997 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerStarted","Data":"6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308"}
Jan 22 11:23:25 crc kubenswrapper[4752]: I0122 11:23:25.502909 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qhl8s" podStartSLOduration=2.904853033 podStartE2EDuration="5.5028841s" podCreationTimestamp="2026-01-22 11:23:20 +0000 UTC" firstStartedPulling="2026-01-22 11:23:22.43716829 +0000 UTC m=+3481.667111198" lastFinishedPulling="2026-01-22 11:23:25.035199357 +0000 UTC m=+3484.265142265" observedRunningTime="2026-01-22 11:23:25.494110756 +0000 UTC m=+3484.724053664" watchObservedRunningTime="2026-01-22 11:23:25.5028841 +0000 UTC m=+3484.732827008"
Jan 22 11:23:31 crc kubenswrapper[4752]: I0122 11:23:31.311225 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:31 crc kubenswrapper[4752]: I0122 11:23:31.311922 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:31 crc kubenswrapper[4752]: I0122 11:23:31.375213 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:31 crc kubenswrapper[4752]: I0122 11:23:31.594142 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:31 crc kubenswrapper[4752]: I0122 11:23:31.644761 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qhl8s"]
Jan 22 11:23:33 crc kubenswrapper[4752]: I0122 11:23:33.563361 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qhl8s" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="registry-server" containerID="cri-o://6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308" gracePeriod=2
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.036839 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.151700 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-utilities\") pod \"5ef79f13-df89-4269-8022-d0582474aec2\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") "
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.151751 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-catalog-content\") pod \"5ef79f13-df89-4269-8022-d0582474aec2\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") "
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.151961 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcx9j\" (UniqueName: \"kubernetes.io/projected/5ef79f13-df89-4269-8022-d0582474aec2-kube-api-access-qcx9j\") pod \"5ef79f13-df89-4269-8022-d0582474aec2\" (UID: \"5ef79f13-df89-4269-8022-d0582474aec2\") "
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.158051 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef79f13-df89-4269-8022-d0582474aec2-kube-api-access-qcx9j" (OuterVolumeSpecName: "kube-api-access-qcx9j") pod "5ef79f13-df89-4269-8022-d0582474aec2" (UID: "5ef79f13-df89-4269-8022-d0582474aec2"). InnerVolumeSpecName "kube-api-access-qcx9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.170660 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-utilities" (OuterVolumeSpecName: "utilities") pod "5ef79f13-df89-4269-8022-d0582474aec2" (UID: "5ef79f13-df89-4269-8022-d0582474aec2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.215513 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ef79f13-df89-4269-8022-d0582474aec2" (UID: "5ef79f13-df89-4269-8022-d0582474aec2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.256912 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.257240 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef79f13-df89-4269-8022-d0582474aec2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.257342 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcx9j\" (UniqueName: \"kubernetes.io/projected/5ef79f13-df89-4269-8022-d0582474aec2-kube-api-access-qcx9j\") on node \"crc\" DevicePath \"\""
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.575768 4752 generic.go:334] "Generic (PLEG): container finished" podID="5ef79f13-df89-4269-8022-d0582474aec2" containerID="6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308" exitCode=0
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.575815 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerDied","Data":"6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308"}
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.575835 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhl8s"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.575873 4752 scope.go:117] "RemoveContainer" containerID="6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.575843 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhl8s" event={"ID":"5ef79f13-df89-4269-8022-d0582474aec2","Type":"ContainerDied","Data":"caca16368800f49d0ddbc58856f5967451cf01318a4b4061d10d7f06836ab7ed"}
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.613903 4752 scope.go:117] "RemoveContainer" containerID="a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.614974 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qhl8s"]
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.623888 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qhl8s"]
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.652123 4752 scope.go:117] "RemoveContainer" containerID="50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.692219 4752 scope.go:117] "RemoveContainer" containerID="6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308"
Jan 22 11:23:34 crc kubenswrapper[4752]: E0122 11:23:34.692713 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308\": container with ID starting with 6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308 not found: ID does not exist" containerID="6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.692955 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308"} err="failed to get container status \"6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308\": rpc error: code = NotFound desc = could not find container \"6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308\": container with ID starting with 6e2f6a24e2fbb7633f128e3f0b3828460044311ee52baedf5fd7792142242308 not found: ID does not exist"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.693080 4752 scope.go:117] "RemoveContainer" containerID="a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e"
Jan 22 11:23:34 crc kubenswrapper[4752]: E0122 11:23:34.693608 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e\": container with ID starting with a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e not found: ID does not exist" containerID="a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.693631 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e"} err="failed to get container status \"a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e\": rpc error: code = NotFound desc = could not find container \"a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e\": container with ID starting with a6c87fc46434466b131db03a5f476164d7a0e6549ad2b1e6ada0f23298a0393e not found: ID does not exist"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.693646 4752 scope.go:117] "RemoveContainer" containerID="50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733"
Jan 22 11:23:34 crc kubenswrapper[4752]: E0122 11:23:34.693935 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733\": container with ID starting with 50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733 not found: ID does not exist" containerID="50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733"
Jan 22 11:23:34 crc kubenswrapper[4752]: I0122 11:23:34.693953 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733"} err="failed to get container status \"50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733\": rpc error: code = NotFound desc = could not find container \"50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733\": container with ID starting with 50015c553c4d9b26bf7ea2ea2b73da94f42a085e41a1602c22d5241bb55ef733 not found: ID does not exist"
Jan 22 11:23:35 crc kubenswrapper[4752]: I0122 11:23:35.117183 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef79f13-df89-4269-8022-d0582474aec2" path="/var/lib/kubelet/pods/5ef79f13-df89-4269-8022-d0582474aec2/volumes"
Jan 22 11:24:57 crc kubenswrapper[4752]: I0122 11:24:57.723738 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:24:57 crc kubenswrapper[4752]: I0122 11:24:57.724365 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:25:27 crc kubenswrapper[4752]: I0122 11:25:27.723975 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:25:27 crc kubenswrapper[4752]: I0122 11:25:27.724778 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:25:57 crc kubenswrapper[4752]: I0122 11:25:57.724125 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:25:57 crc kubenswrapper[4752]: I0122 11:25:57.724768 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:25:57 crc kubenswrapper[4752]: I0122 11:25:57.724828 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 11:25:57 crc kubenswrapper[4752]: I0122 11:25:57.725988 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcb09f4b5e636fffb88830672f7b4d5c7bc907b18986d8a908ea6b3641114c5c"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:25:57 crc kubenswrapper[4752]: I0122 11:25:57.726079 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://bcb09f4b5e636fffb88830672f7b4d5c7bc907b18986d8a908ea6b3641114c5c" gracePeriod=600
Jan 22 11:25:58 crc kubenswrapper[4752]: I0122 11:25:58.230998 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="bcb09f4b5e636fffb88830672f7b4d5c7bc907b18986d8a908ea6b3641114c5c" exitCode=0
Jan 22 11:25:58 crc kubenswrapper[4752]: I0122 11:25:58.231053 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"bcb09f4b5e636fffb88830672f7b4d5c7bc907b18986d8a908ea6b3641114c5c"}
Jan 22 11:25:58 crc kubenswrapper[4752]: I0122 11:25:58.231328 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"}
Jan 22 11:25:58 crc kubenswrapper[4752]: I0122 11:25:58.231355 4752 scope.go:117] "RemoveContainer" containerID="4f53b6143a6540349d0b24f907f48d8480e7e4207509ebd2fe7296fdefe4ae8b"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.935460 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w9cdn"]
Jan 22 11:26:34 crc kubenswrapper[4752]: E0122 11:26:34.936727 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="extract-content"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.936748 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="extract-content"
Jan 22 11:26:34 crc kubenswrapper[4752]: E0122 11:26:34.936795 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="extract-utilities"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.936806 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="extract-utilities"
Jan 22 11:26:34 crc kubenswrapper[4752]: E0122 11:26:34.936826 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="registry-server"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.936834 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="registry-server"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.937142 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef79f13-df89-4269-8022-d0582474aec2" containerName="registry-server"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.939636 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.948175 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9cdn"]
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.966227 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-catalog-content\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.966299 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-utilities\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:34 crc kubenswrapper[4752]: I0122 11:26:34.966397 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7567q\" (UniqueName: \"kubernetes.io/projected/89352294-a433-402f-a946-c61475450595-kube-api-access-7567q\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.068605 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-catalog-content\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.068711 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-utilities\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.068831 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7567q\" (UniqueName: \"kubernetes.io/projected/89352294-a433-402f-a946-c61475450595-kube-api-access-7567q\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.069170 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-catalog-content\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.069424 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-utilities\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.087389 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7567q\" (UniqueName: \"kubernetes.io/projected/89352294-a433-402f-a946-c61475450595-kube-api-access-7567q\") pod \"redhat-marketplace-w9cdn\" (UID: \"89352294-a433-402f-a946-c61475450595\") " pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.282453 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:35 crc kubenswrapper[4752]: I0122 11:26:35.870087 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9cdn"]
Jan 22 11:26:36 crc kubenswrapper[4752]: I0122 11:26:36.655829 4752 generic.go:334] "Generic (PLEG): container finished" podID="89352294-a433-402f-a946-c61475450595" containerID="80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e" exitCode=0
Jan 22 11:26:36 crc kubenswrapper[4752]: I0122 11:26:36.655901 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerDied","Data":"80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e"}
Jan 22 11:26:36 crc kubenswrapper[4752]: I0122 11:26:36.656168 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerStarted","Data":"381d5e41df045392aa3d021a597f881971417366ce17de07d876d57df1abc1c4"}
Jan 22 11:26:37 crc kubenswrapper[4752]: I0122 11:26:37.666943 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerStarted","Data":"16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc"}
Jan 22 11:26:38 crc kubenswrapper[4752]: I0122 11:26:38.681246 4752 generic.go:334] "Generic (PLEG): container finished" podID="89352294-a433-402f-a946-c61475450595" containerID="16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc" exitCode=0
Jan 22 11:26:38 crc kubenswrapper[4752]: I0122 11:26:38.681362 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerDied","Data":"16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc"}
Jan 22 11:26:39 crc kubenswrapper[4752]: I0122 11:26:39.693815 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerStarted","Data":"50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f"}
Jan 22 11:26:39 crc kubenswrapper[4752]: I0122 11:26:39.719421 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w9cdn" podStartSLOduration=3.313691193 podStartE2EDuration="5.71940621s" podCreationTimestamp="2026-01-22 11:26:34 +0000 UTC" firstStartedPulling="2026-01-22 11:26:36.658242601 +0000 UTC m=+3675.888185549" lastFinishedPulling="2026-01-22 11:26:39.063957648 +0000 UTC m=+3678.293900566" observedRunningTime="2026-01-22 11:26:39.716298039 +0000 UTC m=+3678.946241027" watchObservedRunningTime="2026-01-22 11:26:39.71940621 +0000 UTC m=+3678.949349118"
Jan 22 11:26:45 crc kubenswrapper[4752]: I0122 11:26:45.282608 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:45 crc kubenswrapper[4752]: I0122 11:26:45.283282 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:45 crc kubenswrapper[4752]: I0122 11:26:45.346264 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:45 crc kubenswrapper[4752]: I0122 11:26:45.817492 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:45 crc kubenswrapper[4752]: I0122 11:26:45.883151 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9cdn"]
Jan 22 11:26:47 crc kubenswrapper[4752]: I0122 11:26:47.777148 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w9cdn" podUID="89352294-a433-402f-a946-c61475450595" containerName="registry-server" containerID="cri-o://50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f" gracePeriod=2
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.724024 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.784336 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-utilities\") pod \"89352294-a433-402f-a946-c61475450595\" (UID: \"89352294-a433-402f-a946-c61475450595\") "
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.784411 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-catalog-content\") pod \"89352294-a433-402f-a946-c61475450595\" (UID: \"89352294-a433-402f-a946-c61475450595\") "
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.784447 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7567q\" (UniqueName: \"kubernetes.io/projected/89352294-a433-402f-a946-c61475450595-kube-api-access-7567q\") pod \"89352294-a433-402f-a946-c61475450595\" (UID: \"89352294-a433-402f-a946-c61475450595\") "
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.785349 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-utilities" (OuterVolumeSpecName: "utilities") pod "89352294-a433-402f-a946-c61475450595" (UID: "89352294-a433-402f-a946-c61475450595"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.790770 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89352294-a433-402f-a946-c61475450595-kube-api-access-7567q" (OuterVolumeSpecName: "kube-api-access-7567q") pod "89352294-a433-402f-a946-c61475450595" (UID: "89352294-a433-402f-a946-c61475450595"). InnerVolumeSpecName "kube-api-access-7567q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.791269 4752 generic.go:334] "Generic (PLEG): container finished" podID="89352294-a433-402f-a946-c61475450595" containerID="50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f" exitCode=0
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.791312 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerDied","Data":"50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f"}
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.791341 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9cdn" event={"ID":"89352294-a433-402f-a946-c61475450595","Type":"ContainerDied","Data":"381d5e41df045392aa3d021a597f881971417366ce17de07d876d57df1abc1c4"}
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.791363 4752 scope.go:117] "RemoveContainer" containerID="50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.791524 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9cdn"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.807992 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89352294-a433-402f-a946-c61475450595" (UID: "89352294-a433-402f-a946-c61475450595"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.857106 4752 scope.go:117] "RemoveContainer" containerID="16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.880220 4752 scope.go:117] "RemoveContainer" containerID="80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.886375 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.886413 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89352294-a433-402f-a946-c61475450595-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.886423 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7567q\" (UniqueName: \"kubernetes.io/projected/89352294-a433-402f-a946-c61475450595-kube-api-access-7567q\") on node \"crc\" DevicePath \"\""
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.927700 4752 scope.go:117] "RemoveContainer" containerID="50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f"
Jan 22 11:26:48 crc kubenswrapper[4752]: E0122 11:26:48.928262 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f\": container with ID starting with 50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f not found: ID does not exist" containerID="50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.928310 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f"} err="failed to get container status \"50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f\": rpc error: code = NotFound desc = could not find container \"50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f\": container with ID starting with 50447687c0b998fb0af3c12e35b3c0dbf5c21450a6ae750da3261c56efe6ef7f not found: ID does not exist"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.928354 4752 scope.go:117] "RemoveContainer" containerID="16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc"
Jan 22 11:26:48 crc kubenswrapper[4752]: E0122 11:26:48.928822 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc\": container with ID starting with 16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc not found: ID does not exist" containerID="16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.928863 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc"} err="failed to get container status \"16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc\": rpc error: code = NotFound desc = could not find container \"16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc\": container with ID starting with 16b593e0c7cb52756ff85ac03913b2ef8a45e64925860689044899846f1955bc not found: ID does not exist"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.928884 4752 scope.go:117] "RemoveContainer" containerID="80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e"
Jan 22 11:26:48 crc kubenswrapper[4752]: E0122 11:26:48.929466 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e\": container with ID starting with 80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e not found: ID does not exist" containerID="80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e"
Jan 22 11:26:48 crc kubenswrapper[4752]: I0122 11:26:48.929520 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e"} err="failed to get container status \"80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e\": rpc error: code = NotFound desc = could not find container \"80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e\": container with ID starting with 80fe229b01964834e1974cedd595fcb044a3d986b2da2477da554598e9c8768e not found: ID does not exist"
Jan 22 11:26:49 crc kubenswrapper[4752]: I0122 11:26:49.134350 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9cdn"]
Jan 22 11:26:49 crc kubenswrapper[4752]: I0122 11:26:49.146694 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9cdn"]
Jan 22 11:26:51 crc kubenswrapper[4752]: I0122 11:26:51.115448 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89352294-a433-402f-a946-c61475450595" path="/var/lib/kubelet/pods/89352294-a433-402f-a946-c61475450595/volumes"
Jan 22 11:28:27 crc kubenswrapper[4752]: I0122 11:28:27.723688 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:28:27 crc kubenswrapper[4752]: I0122 11:28:27.724355 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:28:57 crc kubenswrapper[4752]: I0122 11:28:57.723439 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:28:57 crc kubenswrapper[4752]: I0122 11:28:57.724034 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:29:27 crc kubenswrapper[4752]: I0122 11:29:27.724109 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:29:27 crc kubenswrapper[4752]: I0122 11:29:27.725077 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:29:27 crc kubenswrapper[4752]: I0122 11:29:27.725171 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 11:29:27 crc kubenswrapper[4752]: I0122 11:29:27.726565 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:29:27 crc kubenswrapper[4752]: I0122 11:29:27.726705 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" gracePeriod=600
Jan 22 11:29:27 crc kubenswrapper[4752]: E0122 11:29:27.885127 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:29:28 crc kubenswrapper[4752]: I0122 11:29:28.476480 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" exitCode=0
Jan 22 11:29:28 crc kubenswrapper[4752]: I0122 11:29:28.476586 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"}
Jan 22 11:29:28 crc kubenswrapper[4752]: I0122 11:29:28.476790 4752 scope.go:117] "RemoveContainer" containerID="bcb09f4b5e636fffb88830672f7b4d5c7bc907b18986d8a908ea6b3641114c5c"
Jan 22 11:29:28 crc kubenswrapper[4752]: I0122 11:29:28.477660 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"
Jan 22 11:29:28 crc kubenswrapper[4752]: E0122 11:29:28.478286 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:29:43 crc kubenswrapper[4752]: I0122 11:29:43.098102 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"
Jan 22 11:29:43 crc kubenswrapper[4752]: E0122 11:29:43.099246 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:29:57 crc kubenswrapper[4752]: I0122 11:29:57.098545 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"
Jan 22 11:29:57 crc kubenswrapper[4752]: E0122 11:29:57.099452 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.184323 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"]
Jan 22 11:30:00 crc kubenswrapper[4752]: E0122 11:30:00.185695 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89352294-a433-402f-a946-c61475450595" containerName="extract-content"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.185717 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="89352294-a433-402f-a946-c61475450595" containerName="extract-content"
Jan 22 11:30:00 crc kubenswrapper[4752]: E0122 11:30:00.185730 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89352294-a433-402f-a946-c61475450595" containerName="registry-server"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.185737 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="89352294-a433-402f-a946-c61475450595" containerName="registry-server"
Jan 22 11:30:00 crc kubenswrapper[4752]: E0122 11:30:00.185771 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89352294-a433-402f-a946-c61475450595" containerName="extract-utilities"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.185780 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="89352294-a433-402f-a946-c61475450595" containerName="extract-utilities"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.186084 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="89352294-a433-402f-a946-c61475450595" containerName="registry-server"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.187837 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.191006 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.191680 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.201829 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"]
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.265681 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de46e919-7db3-4f81-af16-9df1b2e1e114-secret-volume\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.265825 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de46e919-7db3-4f81-af16-9df1b2e1e114-config-volume\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.266105 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7wb8\" (UniqueName: \"kubernetes.io/projected/de46e919-7db3-4f81-af16-9df1b2e1e114-kube-api-access-m7wb8\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.370354 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7wb8\" (UniqueName: \"kubernetes.io/projected/de46e919-7db3-4f81-af16-9df1b2e1e114-kube-api-access-m7wb8\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.370593 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de46e919-7db3-4f81-af16-9df1b2e1e114-secret-volume\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.370651 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de46e919-7db3-4f81-af16-9df1b2e1e114-config-volume\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.372112 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de46e919-7db3-4f81-af16-9df1b2e1e114-config-volume\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.388951 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de46e919-7db3-4f81-af16-9df1b2e1e114-secret-volume\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.394812 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7wb8\" (UniqueName: \"kubernetes.io/projected/de46e919-7db3-4f81-af16-9df1b2e1e114-kube-api-access-m7wb8\") pod \"collect-profiles-29484690-vlj44\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:00 crc kubenswrapper[4752]: I0122 11:30:00.517359 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:01 crc kubenswrapper[4752]: I0122 11:30:01.048030 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"]
Jan 22 11:30:01 crc kubenswrapper[4752]: I0122 11:30:01.812388 4752 generic.go:334] "Generic (PLEG): container finished" podID="de46e919-7db3-4f81-af16-9df1b2e1e114" containerID="9f4be302563835280b8e4a49a87fa03de8ad8752a031de3064e436094fa2f60d" exitCode=0
Jan 22 11:30:01 crc kubenswrapper[4752]: I0122 11:30:01.812701 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44" event={"ID":"de46e919-7db3-4f81-af16-9df1b2e1e114","Type":"ContainerDied","Data":"9f4be302563835280b8e4a49a87fa03de8ad8752a031de3064e436094fa2f60d"}
Jan 22 11:30:01 crc kubenswrapper[4752]: I0122 11:30:01.812735 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44" event={"ID":"de46e919-7db3-4f81-af16-9df1b2e1e114","Type":"ContainerStarted","Data":"823c266bcae1011696286bb5ba588058db9a1fed668ab2e08cbc57d245c27509"}
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.246929 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.343733 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7wb8\" (UniqueName: \"kubernetes.io/projected/de46e919-7db3-4f81-af16-9df1b2e1e114-kube-api-access-m7wb8\") pod \"de46e919-7db3-4f81-af16-9df1b2e1e114\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") "
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.343834 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de46e919-7db3-4f81-af16-9df1b2e1e114-config-volume\") pod \"de46e919-7db3-4f81-af16-9df1b2e1e114\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") "
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.343919 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de46e919-7db3-4f81-af16-9df1b2e1e114-secret-volume\") pod \"de46e919-7db3-4f81-af16-9df1b2e1e114\" (UID: \"de46e919-7db3-4f81-af16-9df1b2e1e114\") "
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.346887 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de46e919-7db3-4f81-af16-9df1b2e1e114-config-volume" (OuterVolumeSpecName: "config-volume") pod "de46e919-7db3-4f81-af16-9df1b2e1e114" (UID: "de46e919-7db3-4f81-af16-9df1b2e1e114"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.360294 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de46e919-7db3-4f81-af16-9df1b2e1e114-kube-api-access-m7wb8" (OuterVolumeSpecName: "kube-api-access-m7wb8") pod "de46e919-7db3-4f81-af16-9df1b2e1e114" (UID: "de46e919-7db3-4f81-af16-9df1b2e1e114"). InnerVolumeSpecName "kube-api-access-m7wb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.360443 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de46e919-7db3-4f81-af16-9df1b2e1e114-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de46e919-7db3-4f81-af16-9df1b2e1e114" (UID: "de46e919-7db3-4f81-af16-9df1b2e1e114"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.447344 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7wb8\" (UniqueName: \"kubernetes.io/projected/de46e919-7db3-4f81-af16-9df1b2e1e114-kube-api-access-m7wb8\") on node \"crc\" DevicePath \"\""
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.447414 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de46e919-7db3-4f81-af16-9df1b2e1e114-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.447429 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de46e919-7db3-4f81-af16-9df1b2e1e114-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.837344 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44" event={"ID":"de46e919-7db3-4f81-af16-9df1b2e1e114","Type":"ContainerDied","Data":"823c266bcae1011696286bb5ba588058db9a1fed668ab2e08cbc57d245c27509"}
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.837382 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823c266bcae1011696286bb5ba588058db9a1fed668ab2e08cbc57d245c27509"
Jan 22 11:30:03 crc kubenswrapper[4752]: I0122 11:30:03.837401 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"
Jan 22 11:30:04 crc kubenswrapper[4752]: I0122 11:30:04.329443 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk"]
Jan 22 11:30:04 crc kubenswrapper[4752]: I0122 11:30:04.342667 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484645-zs8tk"]
Jan 22 11:30:05 crc kubenswrapper[4752]: I0122 11:30:05.108673 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b09c1a7-b13b-45f2-908c-9f3459653f0e" path="/var/lib/kubelet/pods/3b09c1a7-b13b-45f2-908c-9f3459653f0e/volumes"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.103712 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"
Jan 22 11:30:11 crc kubenswrapper[4752]: E0122 11:30:11.104439 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.726517 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-skbgn"]
Jan 22 11:30:11 crc kubenswrapper[4752]: E0122 11:30:11.727493 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de46e919-7db3-4f81-af16-9df1b2e1e114" containerName="collect-profiles"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.727515 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="de46e919-7db3-4f81-af16-9df1b2e1e114" containerName="collect-profiles"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.727777 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="de46e919-7db3-4f81-af16-9df1b2e1e114" containerName="collect-profiles"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.729694 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.740565 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skbgn"]
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.835105 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-catalog-content\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.835362 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-utilities\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.835488 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzmp9\" (UniqueName: \"kubernetes.io/projected/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-kube-api-access-gzmp9\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.939091 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-utilities\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.939564 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-utilities\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.939987 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzmp9\" (UniqueName: \"kubernetes.io/projected/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-kube-api-access-gzmp9\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.940522 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-catalog-content\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.940824 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-catalog-content\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:11 crc kubenswrapper[4752]: I0122 11:30:11.968981 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzmp9\" (UniqueName: \"kubernetes.io/projected/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-kube-api-access-gzmp9\") pod \"certified-operators-skbgn\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") " pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:12 crc kubenswrapper[4752]: I0122 11:30:12.064821 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:12 crc kubenswrapper[4752]: I0122 11:30:12.639096 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skbgn"]
Jan 22 11:30:12 crc kubenswrapper[4752]: I0122 11:30:12.936624 4752 generic.go:334] "Generic (PLEG): container finished" podID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerID="56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124" exitCode=0
Jan 22 11:30:12 crc kubenswrapper[4752]: I0122 11:30:12.936680 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerDied","Data":"56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124"}
Jan 22 11:30:12 crc kubenswrapper[4752]: I0122 11:30:12.936721 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerStarted","Data":"b36d5a3e2381cd042b0ea973aae1d90f8e047f85fd9082362c080ffe951c3900"}
Jan 22 11:30:12 crc kubenswrapper[4752]: I0122 11:30:12.939909 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 11:30:14 crc kubenswrapper[4752]: I0122 11:30:14.959506 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerStarted","Data":"74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f"}
Jan 22 11:30:15 crc kubenswrapper[4752]: I0122 11:30:15.975308 4752 generic.go:334] "Generic (PLEG): container finished" podID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerID="74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f" exitCode=0
Jan 22 11:30:15 crc kubenswrapper[4752]: I0122 11:30:15.975368 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerDied","Data":"74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f"}
Jan 22 11:30:16 crc kubenswrapper[4752]: I0122 11:30:16.988004 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerStarted","Data":"67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47"}
Jan 22 11:30:17 crc kubenswrapper[4752]: I0122 11:30:17.013194 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-skbgn" podStartSLOduration=2.585019622 podStartE2EDuration="6.013166858s" podCreationTimestamp="2026-01-22 11:30:11 +0000 UTC" firstStartedPulling="2026-01-22 11:30:12.939479196 +0000 UTC m=+3892.169422114" lastFinishedPulling="2026-01-22 11:30:16.367626442 +0000 UTC m=+3895.597569350" observedRunningTime="2026-01-22 11:30:17.008922487 +0000 UTC m=+3896.238865395" watchObservedRunningTime="2026-01-22 11:30:17.013166858 +0000 UTC m=+3896.243109766"
Jan 22 11:30:22 crc kubenswrapper[4752]: I0122 11:30:22.066140 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:22 crc kubenswrapper[4752]: I0122 11:30:22.066975 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:22 crc kubenswrapper[4752]: I0122 11:30:22.114695 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:23 crc kubenswrapper[4752]: I0122 11:30:23.112333 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:23 crc kubenswrapper[4752]: I0122 11:30:23.168484 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-skbgn"]
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.066186 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-skbgn" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="registry-server" containerID="cri-o://67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47" gracePeriod=2
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.680081 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skbgn"
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.791322 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzmp9\" (UniqueName: \"kubernetes.io/projected/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-kube-api-access-gzmp9\") pod \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") "
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.791514 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-utilities\") pod \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") "
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.791551 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-catalog-content\") pod \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\" (UID: \"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca\") "
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.792454 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-utilities" (OuterVolumeSpecName: "utilities") pod "53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" (UID: "53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.801422 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-kube-api-access-gzmp9" (OuterVolumeSpecName: "kube-api-access-gzmp9") pod "53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" (UID: "53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca"). InnerVolumeSpecName "kube-api-access-gzmp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.837255 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" (UID: "53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.894237 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzmp9\" (UniqueName: \"kubernetes.io/projected/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-kube-api-access-gzmp9\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.894294 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:25 crc kubenswrapper[4752]: I0122 11:30:25.894309 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.079024 4752 generic.go:334] "Generic (PLEG): container finished" podID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerID="67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47" exitCode=0 Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.079068 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerDied","Data":"67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47"} Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.079094 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skbgn" event={"ID":"53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca","Type":"ContainerDied","Data":"b36d5a3e2381cd042b0ea973aae1d90f8e047f85fd9082362c080ffe951c3900"} Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.079112 4752 scope.go:117] "RemoveContainer" containerID="67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.079240 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-skbgn" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.098123 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:30:26 crc kubenswrapper[4752]: E0122 11:30:26.098474 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.110430 4752 scope.go:117] "RemoveContainer" containerID="74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.135724 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-skbgn"] Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.149243 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-skbgn"] Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.156076 4752 scope.go:117] "RemoveContainer" containerID="56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.187703 4752 scope.go:117] "RemoveContainer" containerID="67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47" Jan 22 11:30:26 crc kubenswrapper[4752]: E0122 11:30:26.190252 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47\": container with ID starting with 67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47 not found: ID does not exist" containerID="67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.190285 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47"} err="failed to get container status \"67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47\": rpc error: code = NotFound desc = could not find container \"67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47\": container with ID starting with 67396a157c52b5b20a86b5c2f10a4519ca87277aa94f2637084ab94e0b9deb47 not found: ID does not exist" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.190310 4752 scope.go:117] "RemoveContainer" containerID="74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f" Jan 22 11:30:26 crc kubenswrapper[4752]: E0122 11:30:26.190668 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f\": container with ID starting with 74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f not found: ID does not exist" containerID="74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.190694 4752 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f"} err="failed to get container status \"74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f\": rpc error: code = NotFound desc = could not find container \"74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f\": container with ID starting with 74addac4eceec6e31b1585d5f37b2c70d43a564cc343490659b850be5641751f not found: ID does not exist" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.190709 4752 scope.go:117] "RemoveContainer" containerID="56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124" Jan 22 11:30:26 crc kubenswrapper[4752]: E0122 11:30:26.193775 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124\": container with ID starting with 56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124 not found: ID does not exist" containerID="56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124" Jan 22 11:30:26 crc kubenswrapper[4752]: I0122 11:30:26.193804 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124"} err="failed to get container status \"56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124\": rpc error: code = NotFound desc = could not find container \"56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124\": container with ID starting with 56ed57d5686dbe15bf20a036e2aa00f859a58a265355158ed1697adedefeb124 not found: ID does not exist" Jan 22 11:30:27 crc kubenswrapper[4752]: I0122 11:30:27.109184 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" path="/var/lib/kubelet/pods/53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca/volumes" Jan 22 11:30:37 crc kubenswrapper[4752]: I0122 11:30:37.098241 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:30:37 crc kubenswrapper[4752]: E0122 11:30:37.099240 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:30:47 crc kubenswrapper[4752]: I0122 11:30:47.295026 4752 scope.go:117] "RemoveContainer" containerID="497c45bb9e441b8f3b1ee9f96ecfac6c5f1e55e78c2ee19d51bf48d82410747c" Jan 22 11:30:52 crc kubenswrapper[4752]: I0122 11:30:52.097926 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:30:52 crc kubenswrapper[4752]: E0122 11:30:52.098766 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:31:06 crc kubenswrapper[4752]: I0122 11:31:06.099016 4752 scope.go:117] 
"RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:31:06 crc kubenswrapper[4752]: E0122 11:31:06.099671 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:31:21 crc kubenswrapper[4752]: I0122 11:31:21.107989 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:31:21 crc kubenswrapper[4752]: E0122 11:31:21.109366 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:31:32 crc kubenswrapper[4752]: I0122 11:31:32.098438 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:31:32 crc kubenswrapper[4752]: E0122 11:31:32.099457 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:31:44 crc kubenswrapper[4752]: I0122 11:31:44.099085 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:31:44 crc kubenswrapper[4752]: E0122 11:31:44.100419 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:31:58 crc kubenswrapper[4752]: I0122 11:31:58.098758 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:31:58 crc kubenswrapper[4752]: E0122 11:31:58.099574 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:32:11 crc kubenswrapper[4752]: I0122 11:32:11.104475 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:32:11 crc kubenswrapper[4752]: E0122 11:32:11.105200 4752 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:32:22 crc kubenswrapper[4752]: I0122 11:32:22.098356 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:32:22 crc kubenswrapper[4752]: E0122 11:32:22.099101 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:32:35 crc kubenswrapper[4752]: I0122 11:32:35.098957 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:32:35 crc kubenswrapper[4752]: E0122 11:32:35.100196 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:32:49 crc kubenswrapper[4752]: I0122 11:32:49.098339 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:32:49 crc kubenswrapper[4752]: E0122 11:32:49.099207 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:33:03 crc kubenswrapper[4752]: I0122 11:33:03.097971 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:33:03 crc kubenswrapper[4752]: E0122 11:33:03.098739 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.434564 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pjndt"] Jan 22 11:33:10 crc kubenswrapper[4752]: E0122 11:33:10.435307 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="extract-content" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.435320 4752 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="extract-content" Jan 22 11:33:10 crc kubenswrapper[4752]: E0122 11:33:10.435349 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="extract-utilities" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.435356 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="extract-utilities" Jan 22 11:33:10 crc kubenswrapper[4752]: E0122 11:33:10.435378 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="registry-server" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.435384 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="registry-server" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.435563 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="53fb3186-4b4b-4b8f-a5ab-5c060f7ee2ca" containerName="registry-server" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.437174 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.458904 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjndt"] Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.570608 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvn5c\" (UniqueName: \"kubernetes.io/projected/c8f2f6ac-9877-4c24-9427-32db094eefee-kube-api-access-nvn5c\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.570981 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-utilities\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.571323 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-catalog-content\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.673078 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-catalog-content\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.673203 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvn5c\" (UniqueName: \"kubernetes.io/projected/c8f2f6ac-9877-4c24-9427-32db094eefee-kube-api-access-nvn5c\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.673252 4752 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-utilities\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.673877 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-utilities\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:10 crc kubenswrapper[4752]: I0122 11:33:10.674558 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-catalog-content\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:11 crc kubenswrapper[4752]: I0122 11:33:11.168210 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvn5c\" (UniqueName: \"kubernetes.io/projected/c8f2f6ac-9877-4c24-9427-32db094eefee-kube-api-access-nvn5c\") pod \"redhat-operators-pjndt\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:11 crc kubenswrapper[4752]: I0122 11:33:11.356719 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:12 crc kubenswrapper[4752]: I0122 11:33:12.077717 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pjndt"] Jan 22 11:33:12 crc kubenswrapper[4752]: I0122 11:33:12.724120 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerStarted","Data":"0a513087afd75e6f28d763a15e139d68aed671cba748e1b4afde6df831fd1b21"} Jan 22 11:33:12 crc kubenswrapper[4752]: I0122 11:33:12.724431 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerStarted","Data":"7d79fb276bd798e3902cf80564c5317c6fdbd640dda63dd5ece916c7aff92a30"} Jan 22 11:33:13 crc kubenswrapper[4752]: I0122 11:33:13.736801 4752 generic.go:334] "Generic (PLEG): container finished" podID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerID="0a513087afd75e6f28d763a15e139d68aed671cba748e1b4afde6df831fd1b21" exitCode=0 Jan 22 11:33:13 crc kubenswrapper[4752]: I0122 11:33:13.736997 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerDied","Data":"0a513087afd75e6f28d763a15e139d68aed671cba748e1b4afde6df831fd1b21"} Jan 22 11:33:15 crc kubenswrapper[4752]: I0122 11:33:15.099076 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:33:15 crc kubenswrapper[4752]: E0122 11:33:15.099978 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:33:16 crc kubenswrapper[4752]: I0122 11:33:16.766351 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerStarted","Data":"a9f6bb7b210c80cd802265b2e03df3c54f8c7fa33c46e0ee249d85d89701a307"} Jan 22 11:33:22 crc kubenswrapper[4752]: I0122 11:33:22.830954 4752 generic.go:334] "Generic (PLEG): container finished" podID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerID="a9f6bb7b210c80cd802265b2e03df3c54f8c7fa33c46e0ee249d85d89701a307" exitCode=0 Jan 22 11:33:22 crc kubenswrapper[4752]: I0122 11:33:22.831066 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerDied","Data":"a9f6bb7b210c80cd802265b2e03df3c54f8c7fa33c46e0ee249d85d89701a307"} Jan 22 11:33:25 crc kubenswrapper[4752]: I0122 11:33:25.861641 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerStarted","Data":"961ad80047067112d6485ee59e0a111190f64ca7d1adda33e0355285021f4880"} Jan 22 11:33:25 crc kubenswrapper[4752]: I0122 11:33:25.885390 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pjndt" podStartSLOduration=4.8264669 podStartE2EDuration="15.885370829s" podCreationTimestamp="2026-01-22 11:33:10 +0000 UTC" firstStartedPulling="2026-01-22 11:33:13.740318444 +0000 UTC m=+4072.970261362" lastFinishedPulling="2026-01-22 11:33:24.799222383 +0000 UTC m=+4084.029165291" observedRunningTime="2026-01-22 11:33:25.879139007 +0000 UTC m=+4085.109081915" watchObservedRunningTime="2026-01-22 11:33:25.885370829 +0000 UTC m=+4085.115313747" Jan 22 11:33:29 crc kubenswrapper[4752]: I0122 11:33:29.100694 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:33:29 crc kubenswrapper[4752]: E0122 11:33:29.103366 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:33:31 crc kubenswrapper[4752]: I0122 11:33:31.357241 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:31 crc kubenswrapper[4752]: I0122 11:33:31.357825 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:31 crc kubenswrapper[4752]: I0122 11:33:31.408380 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:32 crc kubenswrapper[4752]: I0122 11:33:32.045205 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:32 crc kubenswrapper[4752]: I0122 11:33:32.103502 
4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjndt"] Jan 22 11:33:33 crc kubenswrapper[4752]: I0122 11:33:33.944837 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pjndt" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="registry-server" containerID="cri-o://961ad80047067112d6485ee59e0a111190f64ca7d1adda33e0355285021f4880" gracePeriod=2 Jan 22 11:33:34 crc kubenswrapper[4752]: I0122 11:33:34.958911 4752 generic.go:334] "Generic (PLEG): container finished" podID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerID="961ad80047067112d6485ee59e0a111190f64ca7d1adda33e0355285021f4880" exitCode=0 Jan 22 11:33:34 crc kubenswrapper[4752]: I0122 11:33:34.958946 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerDied","Data":"961ad80047067112d6485ee59e0a111190f64ca7d1adda33e0355285021f4880"} Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.508660 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.553682 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-utilities\") pod \"c8f2f6ac-9877-4c24-9427-32db094eefee\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.553983 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-catalog-content\") pod \"c8f2f6ac-9877-4c24-9427-32db094eefee\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.554287 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvn5c\" (UniqueName: \"kubernetes.io/projected/c8f2f6ac-9877-4c24-9427-32db094eefee-kube-api-access-nvn5c\") pod \"c8f2f6ac-9877-4c24-9427-32db094eefee\" (UID: \"c8f2f6ac-9877-4c24-9427-32db094eefee\") " Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.554390 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-utilities" (OuterVolumeSpecName: "utilities") pod "c8f2f6ac-9877-4c24-9427-32db094eefee" (UID: "c8f2f6ac-9877-4c24-9427-32db094eefee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.556514 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.560131 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f2f6ac-9877-4c24-9427-32db094eefee-kube-api-access-nvn5c" (OuterVolumeSpecName: "kube-api-access-nvn5c") pod "c8f2f6ac-9877-4c24-9427-32db094eefee" (UID: "c8f2f6ac-9877-4c24-9427-32db094eefee"). InnerVolumeSpecName "kube-api-access-nvn5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.658262 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvn5c\" (UniqueName: \"kubernetes.io/projected/c8f2f6ac-9877-4c24-9427-32db094eefee-kube-api-access-nvn5c\") on node \"crc\" DevicePath \"\"" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.701485 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8f2f6ac-9877-4c24-9427-32db094eefee" (UID: "c8f2f6ac-9877-4c24-9427-32db094eefee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.761311 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f2f6ac-9877-4c24-9427-32db094eefee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.979285 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pjndt" event={"ID":"c8f2f6ac-9877-4c24-9427-32db094eefee","Type":"ContainerDied","Data":"7d79fb276bd798e3902cf80564c5317c6fdbd640dda63dd5ece916c7aff92a30"} Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.979357 4752 scope.go:117] "RemoveContainer" containerID="961ad80047067112d6485ee59e0a111190f64ca7d1adda33e0355285021f4880" Jan 22 11:33:36 crc kubenswrapper[4752]: I0122 11:33:36.979429 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pjndt" Jan 22 11:33:37 crc kubenswrapper[4752]: I0122 11:33:37.012808 4752 scope.go:117] "RemoveContainer" containerID="a9f6bb7b210c80cd802265b2e03df3c54f8c7fa33c46e0ee249d85d89701a307" Jan 22 11:33:37 crc kubenswrapper[4752]: I0122 11:33:37.020686 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pjndt"] Jan 22 11:33:37 crc kubenswrapper[4752]: I0122 11:33:37.035513 4752 scope.go:117] "RemoveContainer" containerID="0a513087afd75e6f28d763a15e139d68aed671cba748e1b4afde6df831fd1b21" Jan 22 11:33:37 crc kubenswrapper[4752]: I0122 11:33:37.042901 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pjndt"] Jan 22 11:33:37 crc kubenswrapper[4752]: I0122 11:33:37.109628 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" path="/var/lib/kubelet/pods/c8f2f6ac-9877-4c24-9427-32db094eefee/volumes" Jan 22 11:33:40 crc kubenswrapper[4752]: I0122 11:33:40.098369 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:33:40 crc kubenswrapper[4752]: E0122 11:33:40.099088 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:33:52 crc kubenswrapper[4752]: I0122 11:33:52.098227 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 
11:33:52 crc kubenswrapper[4752]: E0122 11:33:52.099368 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:34:04 crc kubenswrapper[4752]: I0122 11:34:04.099237 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:34:04 crc kubenswrapper[4752]: E0122 11:34:04.100795 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.927008 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cwgkt"] Jan 22 11:34:15 crc kubenswrapper[4752]: E0122 11:34:15.928407 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="extract-content" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.928424 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="extract-content" Jan 22 11:34:15 crc kubenswrapper[4752]: E0122 11:34:15.928491 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="registry-server" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.928501 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="registry-server" Jan 22 11:34:15 crc kubenswrapper[4752]: E0122 11:34:15.928513 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="extract-utilities" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.928522 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="extract-utilities" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.928751 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f2f6ac-9877-4c24-9427-32db094eefee" containerName="registry-server" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.930567 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:15 crc kubenswrapper[4752]: I0122 11:34:15.941245 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cwgkt"] Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.091932 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-catalog-content\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.092175 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqg6k\" (UniqueName: \"kubernetes.io/projected/9b0a4217-823d-497c-a68f-29c9a7723af9-kube-api-access-jqg6k\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.092235 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-utilities\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.194407 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqg6k\" (UniqueName: \"kubernetes.io/projected/9b0a4217-823d-497c-a68f-29c9a7723af9-kube-api-access-jqg6k\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.194487 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-utilities\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.194748 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-catalog-content\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.195078 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-utilities\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.195235 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-catalog-content\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.220602 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jqg6k\" (UniqueName: \"kubernetes.io/projected/9b0a4217-823d-497c-a68f-29c9a7723af9-kube-api-access-jqg6k\") pod \"community-operators-cwgkt\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") " pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.258463 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwgkt" Jan 22 11:34:16 crc kubenswrapper[4752]: I0122 11:34:16.937433 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cwgkt"] Jan 22 11:34:17 crc kubenswrapper[4752]: I0122 11:34:17.414809 4752 generic.go:334] "Generic (PLEG): container finished" podID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerID="b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496" exitCode=0 Jan 22 11:34:17 crc kubenswrapper[4752]: I0122 11:34:17.414921 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerDied","Data":"b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496"} Jan 22 11:34:17 crc kubenswrapper[4752]: I0122 11:34:17.414961 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerStarted","Data":"2e92a92158b5ed40a3344068c435d5f856feeaf1f395fb3ee03c3834d523ba3c"} Jan 22 11:34:19 crc kubenswrapper[4752]: I0122 11:34:19.098773 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468" Jan 22 11:34:19 crc kubenswrapper[4752]: E0122 11:34:19.099220 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:34:19 crc kubenswrapper[4752]: I0122 11:34:19.440805 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerStarted","Data":"1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155"} Jan 22 11:34:20 crc kubenswrapper[4752]: I0122 11:34:20.460018 4752 generic.go:334] "Generic (PLEG): container finished" podID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerID="1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155" exitCode=0 Jan 22 11:34:20 crc kubenswrapper[4752]: I0122 11:34:20.460900 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerDied","Data":"1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155"} Jan 22 11:34:21 crc kubenswrapper[4752]: I0122 11:34:21.486136 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerStarted","Data":"233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18"} Jan 22 11:34:21 crc kubenswrapper[4752]: I0122 11:34:21.516751 4752 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cwgkt" podStartSLOduration=3.044150189 podStartE2EDuration="6.516717311s" podCreationTimestamp="2026-01-22 11:34:15 +0000 UTC" firstStartedPulling="2026-01-22 11:34:17.418954414 +0000 UTC m=+4136.648897322" lastFinishedPulling="2026-01-22 11:34:20.891521536 +0000 UTC m=+4140.121464444" observedRunningTime="2026-01-22 11:34:21.508377104 +0000 UTC m=+4140.738320032" watchObservedRunningTime="2026-01-22 11:34:21.516717311 +0000 UTC m=+4140.746660219"
Jan 22 11:34:26 crc kubenswrapper[4752]: I0122 11:34:26.258968 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cwgkt"
Jan 22 11:34:26 crc kubenswrapper[4752]: I0122 11:34:26.259590 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cwgkt"
Jan 22 11:34:26 crc kubenswrapper[4752]: I0122 11:34:26.332636 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cwgkt"
Jan 22 11:34:26 crc kubenswrapper[4752]: I0122 11:34:26.581688 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cwgkt"
Jan 22 11:34:26 crc kubenswrapper[4752]: I0122 11:34:26.644257 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cwgkt"]
Jan 22 11:34:28 crc kubenswrapper[4752]: I0122 11:34:28.563320 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cwgkt" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="registry-server" containerID="cri-o://233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18" gracePeriod=2
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.123879 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwgkt"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.243197 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-utilities\") pod \"9b0a4217-823d-497c-a68f-29c9a7723af9\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") "
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.243264 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-catalog-content\") pod \"9b0a4217-823d-497c-a68f-29c9a7723af9\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") "
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.243392 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqg6k\" (UniqueName: \"kubernetes.io/projected/9b0a4217-823d-497c-a68f-29c9a7723af9-kube-api-access-jqg6k\") pod \"9b0a4217-823d-497c-a68f-29c9a7723af9\" (UID: \"9b0a4217-823d-497c-a68f-29c9a7723af9\") "
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.244530 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-utilities" (OuterVolumeSpecName: "utilities") pod "9b0a4217-823d-497c-a68f-29c9a7723af9" (UID: "9b0a4217-823d-497c-a68f-29c9a7723af9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.252645 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0a4217-823d-497c-a68f-29c9a7723af9-kube-api-access-jqg6k" (OuterVolumeSpecName: "kube-api-access-jqg6k") pod "9b0a4217-823d-497c-a68f-29c9a7723af9" (UID: "9b0a4217-823d-497c-a68f-29c9a7723af9"). InnerVolumeSpecName "kube-api-access-jqg6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.307274 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b0a4217-823d-497c-a68f-29c9a7723af9" (UID: "9b0a4217-823d-497c-a68f-29c9a7723af9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.346337 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.346391 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0a4217-823d-497c-a68f-29c9a7723af9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.346404 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqg6k\" (UniqueName: \"kubernetes.io/projected/9b0a4217-823d-497c-a68f-29c9a7723af9-kube-api-access-jqg6k\") on node \"crc\" DevicePath \"\""
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.574260 4752 generic.go:334] "Generic (PLEG): container finished" podID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerID="233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18" exitCode=0
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.574307 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerDied","Data":"233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18"}
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.574321 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cwgkt"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.574343 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cwgkt" event={"ID":"9b0a4217-823d-497c-a68f-29c9a7723af9","Type":"ContainerDied","Data":"2e92a92158b5ed40a3344068c435d5f856feeaf1f395fb3ee03c3834d523ba3c"}
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.574364 4752 scope.go:117] "RemoveContainer" containerID="233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.598123 4752 scope.go:117] "RemoveContainer" containerID="1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.613892 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cwgkt"]
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.653195 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cwgkt"]
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.654012 4752 scope.go:117] "RemoveContainer" containerID="b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.685594 4752 scope.go:117] "RemoveContainer" containerID="233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18"
Jan 22 11:34:29 crc kubenswrapper[4752]: E0122 11:34:29.686398 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18\": container with ID starting with 233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18 not found: ID does not exist" containerID="233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.686437 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18"} err="failed to get container status \"233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18\": rpc error: code = NotFound desc = could not find container \"233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18\": container with ID starting with 233a94bfb0bd859573fd593185b74a90e82e66a3d7f558bdff8b070a240a2e18 not found: ID does not exist"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.686462 4752 scope.go:117] "RemoveContainer" containerID="1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155"
Jan 22 11:34:29 crc kubenswrapper[4752]: E0122 11:34:29.687273 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155\": container with ID starting with 1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155 not found: ID does not exist" containerID="1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.687377 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155"} err="failed to get container status \"1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155\": rpc error: code = NotFound desc = could not find container \"1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155\": container with ID starting with 1b72e61c31761053e084e7952d10af3abecc1d1a01e7ea868af21f0911ca2155 not found: ID does not exist"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.687434 4752 scope.go:117] "RemoveContainer" containerID="b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496"
Jan 22 11:34:29 crc kubenswrapper[4752]: E0122 11:34:29.687931 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496\": container with ID starting with b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496 not found: ID does not exist" containerID="b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496"
Jan 22 11:34:29 crc kubenswrapper[4752]: I0122 11:34:29.687967 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496"} err="failed to get container status \"b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496\": rpc error: code = NotFound desc = could not find container \"b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496\": container with ID starting with b4a950b0deb1d142c4a53a92e3ae87f69c626cac5a666d6dbf22612ee60f4496 not found: ID does not exist"
Jan 22 11:34:31 crc kubenswrapper[4752]: I0122 11:34:31.112944 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" path="/var/lib/kubelet/pods/9b0a4217-823d-497c-a68f-29c9a7723af9/volumes"
Jan 22 11:34:33 crc kubenswrapper[4752]: I0122 11:34:33.098366 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"
Jan 22 11:34:33 crc kubenswrapper[4752]: I0122 11:34:33.641065 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"da61bc29cc6e108a8b9eff72384c2d68f8124d29398c2fbbcef8ea281fb27923"}
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.732211 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxt8"]
Jan 22 11:36:47 crc kubenswrapper[4752]: E0122 11:36:47.745399 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="extract-utilities"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.749930 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="extract-utilities"
Jan 22 11:36:47 crc kubenswrapper[4752]: E0122 11:36:47.750179 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="registry-server"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.750231 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="registry-server"
Jan 22 11:36:47 crc kubenswrapper[4752]: E0122 11:36:47.750321 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="extract-content"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.750371 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="extract-content"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.751177 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0a4217-823d-497c-a68f-29c9a7723af9" containerName="registry-server"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.756493 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxt8"]
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.756706 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.806145 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6dcd\" (UniqueName: \"kubernetes.io/projected/3c336d83-8666-4f11-84dd-effabc20ee8a-kube-api-access-j6dcd\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.806200 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-catalog-content\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.806448 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-utilities\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.908076 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6dcd\" (UniqueName: \"kubernetes.io/projected/3c336d83-8666-4f11-84dd-effabc20ee8a-kube-api-access-j6dcd\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.908406 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-catalog-content\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.908563 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-utilities\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.909033 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-catalog-content\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.909051 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-utilities\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:47 crc kubenswrapper[4752]: I0122 11:36:47.936083 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6dcd\" (UniqueName: \"kubernetes.io/projected/3c336d83-8666-4f11-84dd-effabc20ee8a-kube-api-access-j6dcd\") pod \"redhat-marketplace-wlxt8\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") " pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:48 crc kubenswrapper[4752]: I0122 11:36:48.078349 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:48 crc kubenswrapper[4752]: I0122 11:36:48.581293 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxt8"]
Jan 22 11:36:49 crc kubenswrapper[4752]: I0122 11:36:49.143365 4752 generic.go:334] "Generic (PLEG): container finished" podID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerID="8cfeeb602ce3e656438d64bd3e9be95c06406d9c6ec1379c2b0fdf71300d6c09" exitCode=0
Jan 22 11:36:49 crc kubenswrapper[4752]: I0122 11:36:49.143622 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerDied","Data":"8cfeeb602ce3e656438d64bd3e9be95c06406d9c6ec1379c2b0fdf71300d6c09"}
Jan 22 11:36:49 crc kubenswrapper[4752]: I0122 11:36:49.144208 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerStarted","Data":"6c69f469a1c3d36784441051853f22b45d139b4a8e702961281b91e4a085c833"}
Jan 22 11:36:49 crc kubenswrapper[4752]: I0122 11:36:49.147016 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 11:36:50 crc kubenswrapper[4752]: I0122 11:36:50.155172 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerStarted","Data":"58ced1f343c02ba137bc7a8e482bea8ade25ce3012af987a39a3a936e070ff9f"}
Jan 22 11:36:51 crc kubenswrapper[4752]: I0122 11:36:51.171951 4752 generic.go:334] "Generic (PLEG): container finished" podID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerID="58ced1f343c02ba137bc7a8e482bea8ade25ce3012af987a39a3a936e070ff9f" exitCode=0
Jan 22 11:36:51 crc kubenswrapper[4752]: I0122 11:36:51.172103 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerDied","Data":"58ced1f343c02ba137bc7a8e482bea8ade25ce3012af987a39a3a936e070ff9f"}
Jan 22 11:36:52 crc kubenswrapper[4752]: I0122 11:36:52.185292 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerStarted","Data":"763bec4b9229f485e5203563458c12dcfef02f17126da5b0417171c5b36faf8b"}
Jan 22 11:36:52 crc kubenswrapper[4752]: I0122 11:36:52.212484 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlxt8" podStartSLOduration=2.7454951530000002 podStartE2EDuration="5.212465645s" podCreationTimestamp="2026-01-22 11:36:47 +0000 UTC" firstStartedPulling="2026-01-22 11:36:49.146679608 +0000 UTC m=+4288.376622516" lastFinishedPulling="2026-01-22 11:36:51.6136501 +0000 UTC m=+4290.843593008" observedRunningTime="2026-01-22 11:36:52.205479563 +0000 UTC m=+4291.435422501" watchObservedRunningTime="2026-01-22 11:36:52.212465645 +0000 UTC m=+4291.442408553"
Jan 22 11:36:57 crc kubenswrapper[4752]: I0122 11:36:57.723595 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:36:57 crc kubenswrapper[4752]: I0122 11:36:57.724122 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:36:58 crc kubenswrapper[4752]: I0122 11:36:58.078535 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:58 crc kubenswrapper[4752]: I0122 11:36:58.078587 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:58 crc kubenswrapper[4752]: I0122 11:36:58.128759 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:36:58 crc kubenswrapper[4752]: I0122 11:36:58.338049 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:37:01 crc kubenswrapper[4752]: I0122 11:37:01.507908 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxt8"]
Jan 22 11:37:01 crc kubenswrapper[4752]: I0122 11:37:01.508713 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wlxt8" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="registry-server" containerID="cri-o://763bec4b9229f485e5203563458c12dcfef02f17126da5b0417171c5b36faf8b" gracePeriod=2
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.317765 4752 generic.go:334] "Generic (PLEG): container finished" podID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerID="763bec4b9229f485e5203563458c12dcfef02f17126da5b0417171c5b36faf8b" exitCode=0
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.317829 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerDied","Data":"763bec4b9229f485e5203563458c12dcfef02f17126da5b0417171c5b36faf8b"}
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.564607 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.652172 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-catalog-content\") pod \"3c336d83-8666-4f11-84dd-effabc20ee8a\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") "
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.652308 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6dcd\" (UniqueName: \"kubernetes.io/projected/3c336d83-8666-4f11-84dd-effabc20ee8a-kube-api-access-j6dcd\") pod \"3c336d83-8666-4f11-84dd-effabc20ee8a\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") "
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.652484 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-utilities\") pod \"3c336d83-8666-4f11-84dd-effabc20ee8a\" (UID: \"3c336d83-8666-4f11-84dd-effabc20ee8a\") "
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.653710 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-utilities" (OuterVolumeSpecName: "utilities") pod "3c336d83-8666-4f11-84dd-effabc20ee8a" (UID: "3c336d83-8666-4f11-84dd-effabc20ee8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.659864 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c336d83-8666-4f11-84dd-effabc20ee8a-kube-api-access-j6dcd" (OuterVolumeSpecName: "kube-api-access-j6dcd") pod "3c336d83-8666-4f11-84dd-effabc20ee8a" (UID: "3c336d83-8666-4f11-84dd-effabc20ee8a"). InnerVolumeSpecName "kube-api-access-j6dcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.689548 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c336d83-8666-4f11-84dd-effabc20ee8a" (UID: "3c336d83-8666-4f11-84dd-effabc20ee8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.754446 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6dcd\" (UniqueName: \"kubernetes.io/projected/3c336d83-8666-4f11-84dd-effabc20ee8a-kube-api-access-j6dcd\") on node \"crc\" DevicePath \"\""
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.754512 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:37:02 crc kubenswrapper[4752]: I0122 11:37:02.754530 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c336d83-8666-4f11-84dd-effabc20ee8a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.328632 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlxt8" event={"ID":"3c336d83-8666-4f11-84dd-effabc20ee8a","Type":"ContainerDied","Data":"6c69f469a1c3d36784441051853f22b45d139b4a8e702961281b91e4a085c833"}
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.328689 4752 scope.go:117] "RemoveContainer" containerID="763bec4b9229f485e5203563458c12dcfef02f17126da5b0417171c5b36faf8b"
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.328711 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlxt8"
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.350822 4752 scope.go:117] "RemoveContainer" containerID="58ced1f343c02ba137bc7a8e482bea8ade25ce3012af987a39a3a936e070ff9f"
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.364156 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxt8"]
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.375897 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlxt8"]
Jan 22 11:37:03 crc kubenswrapper[4752]: I0122 11:37:03.391053 4752 scope.go:117] "RemoveContainer" containerID="8cfeeb602ce3e656438d64bd3e9be95c06406d9c6ec1379c2b0fdf71300d6c09"
Jan 22 11:37:05 crc kubenswrapper[4752]: I0122 11:37:05.115268 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" path="/var/lib/kubelet/pods/3c336d83-8666-4f11-84dd-effabc20ee8a/volumes"
Jan 22 11:37:27 crc kubenswrapper[4752]: I0122 11:37:27.723760 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:37:27 crc kubenswrapper[4752]: I0122 11:37:27.724463 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.724901 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.725369 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.725425 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.726343 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da61bc29cc6e108a8b9eff72384c2d68f8124d29398c2fbbcef8ea281fb27923"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.726388 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://da61bc29cc6e108a8b9eff72384c2d68f8124d29398c2fbbcef8ea281fb27923" gracePeriod=600
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.870786 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="da61bc29cc6e108a8b9eff72384c2d68f8124d29398c2fbbcef8ea281fb27923" exitCode=0
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.870977 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"da61bc29cc6e108a8b9eff72384c2d68f8124d29398c2fbbcef8ea281fb27923"}
Jan 22 11:37:57 crc kubenswrapper[4752]: I0122 11:37:57.871056 4752 scope.go:117] "RemoveContainer" containerID="0a2ab070c16147a868fce7b4d4b9b4f8c7c8a200748a5e0630aaa36e65d64468"
Jan 22 11:37:58 crc kubenswrapper[4752]: I0122 11:37:58.882062 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"}
Jan 22 11:40:27 crc kubenswrapper[4752]: I0122 11:40:27.723720 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:40:27 crc kubenswrapper[4752]: I0122 11:40:27.724371 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.760578 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vd4vc"]
Jan 22 11:40:29 crc kubenswrapper[4752]: E0122 11:40:29.761282 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="extract-content"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.761300 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="extract-content"
Jan 22 11:40:29 crc kubenswrapper[4752]: E0122 11:40:29.761310 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="extract-utilities"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.761316 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="extract-utilities"
Jan 22 11:40:29 crc kubenswrapper[4752]: E0122 11:40:29.761328 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="registry-server"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.761334 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="registry-server"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.761796 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c336d83-8666-4f11-84dd-effabc20ee8a" containerName="registry-server"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.763229 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.794057 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vd4vc"]
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.923458 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-catalog-content\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.923932 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-utilities\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:29 crc kubenswrapper[4752]: I0122 11:40:29.924046 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqzd\" (UniqueName: \"kubernetes.io/projected/937a4c25-1fa8-4669-b9fd-d626aa269489-kube-api-access-4qqzd\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.027144 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-utilities\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.027292 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqzd\" (UniqueName: \"kubernetes.io/projected/937a4c25-1fa8-4669-b9fd-d626aa269489-kube-api-access-4qqzd\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.027375 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-catalog-content\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.028316 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-catalog-content\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.028386 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-utilities\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.556432 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqzd\" (UniqueName: \"kubernetes.io/projected/937a4c25-1fa8-4669-b9fd-d626aa269489-kube-api-access-4qqzd\") pod \"certified-operators-vd4vc\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") " pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:30 crc kubenswrapper[4752]: I0122 11:40:30.684554 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:31 crc kubenswrapper[4752]: I0122 11:40:31.231046 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vd4vc"]
Jan 22 11:40:31 crc kubenswrapper[4752]: I0122 11:40:31.511968 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerStarted","Data":"26b1a9c03851bce64ada314e32981149a5ff0197eae783a3dbf3893046ffaceb"}
Jan 22 11:40:32 crc kubenswrapper[4752]: I0122 11:40:32.522328 4752 generic.go:334] "Generic (PLEG): container finished" podID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerID="fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30" exitCode=0
Jan 22 11:40:32 crc kubenswrapper[4752]: I0122 11:40:32.522405 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerDied","Data":"fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30"}
Jan 22 11:40:33 crc kubenswrapper[4752]: I0122 11:40:33.540600 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerStarted","Data":"b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3"}
Jan 22 11:40:34 crc kubenswrapper[4752]: I0122 11:40:34.551484 4752 generic.go:334] "Generic (PLEG): container finished" podID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerID="b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3" exitCode=0
Jan 22 11:40:34 crc kubenswrapper[4752]: I0122 11:40:34.551565 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerDied","Data":"b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3"}
Jan 22 11:40:35 crc kubenswrapper[4752]: I0122 11:40:35.562949 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerStarted","Data":"c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49"}
Jan 22 11:40:40 crc kubenswrapper[4752]: I0122 11:40:40.684771 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:40 crc kubenswrapper[4752]: I0122 11:40:40.685477 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:40 crc kubenswrapper[4752]: I0122 11:40:40.754738 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:40 crc kubenswrapper[4752]: I0122 11:40:40.778055 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vd4vc" podStartSLOduration=9.386919734 podStartE2EDuration="11.778027808s" podCreationTimestamp="2026-01-22 11:40:29 +0000 UTC" firstStartedPulling="2026-01-22 11:40:32.526411004 +0000 UTC m=+4511.756353922" lastFinishedPulling="2026-01-22 11:40:34.917519078 +0000 UTC m=+4514.147461996" observedRunningTime="2026-01-22 11:40:35.58601673 +0000 UTC m=+4514.815959658" watchObservedRunningTime="2026-01-22 11:40:40.778027808 +0000 UTC m=+4520.007970716"
Jan 22 11:40:41 crc kubenswrapper[4752]: I0122 11:40:41.924300 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:43 crc kubenswrapper[4752]: I0122 11:40:43.392815 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vd4vc"]
Jan 22 11:40:43 crc kubenswrapper[4752]: I0122 11:40:43.662470 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vd4vc" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="registry-server" containerID="cri-o://c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49" gracePeriod=2
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.278953 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.382628 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-utilities\") pod \"937a4c25-1fa8-4669-b9fd-d626aa269489\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") "
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.383047 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqzd\" (UniqueName: \"kubernetes.io/projected/937a4c25-1fa8-4669-b9fd-d626aa269489-kube-api-access-4qqzd\") pod \"937a4c25-1fa8-4669-b9fd-d626aa269489\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") "
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.383107 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-catalog-content\") pod \"937a4c25-1fa8-4669-b9fd-d626aa269489\" (UID: \"937a4c25-1fa8-4669-b9fd-d626aa269489\") "
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.384965 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-utilities" (OuterVolumeSpecName: "utilities") pod "937a4c25-1fa8-4669-b9fd-d626aa269489" (UID: "937a4c25-1fa8-4669-b9fd-d626aa269489"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.390741 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937a4c25-1fa8-4669-b9fd-d626aa269489-kube-api-access-4qqzd" (OuterVolumeSpecName: "kube-api-access-4qqzd") pod "937a4c25-1fa8-4669-b9fd-d626aa269489" (UID: "937a4c25-1fa8-4669-b9fd-d626aa269489"). InnerVolumeSpecName "kube-api-access-4qqzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.436841 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "937a4c25-1fa8-4669-b9fd-d626aa269489" (UID: "937a4c25-1fa8-4669-b9fd-d626aa269489"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.486325 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.486376 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqzd\" (UniqueName: \"kubernetes.io/projected/937a4c25-1fa8-4669-b9fd-d626aa269489-kube-api-access-4qqzd\") on node \"crc\" DevicePath \"\""
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.486393 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937a4c25-1fa8-4669-b9fd-d626aa269489-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.678534 4752 generic.go:334] "Generic (PLEG): container finished" podID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerID="c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49" exitCode=0
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.678601 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerDied","Data":"c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49"}
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.678654 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vd4vc" event={"ID":"937a4c25-1fa8-4669-b9fd-d626aa269489","Type":"ContainerDied","Data":"26b1a9c03851bce64ada314e32981149a5ff0197eae783a3dbf3893046ffaceb"}
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.678678 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vd4vc"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.678689 4752 scope.go:117] "RemoveContainer" containerID="c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.709116 4752 scope.go:117] "RemoveContainer" containerID="b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.744362 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vd4vc"]
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.755743 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vd4vc"]
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.759333 4752 scope.go:117] "RemoveContainer" containerID="fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.819352 4752 scope.go:117] "RemoveContainer" containerID="c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49"
Jan 22 11:40:44 crc kubenswrapper[4752]: E0122 11:40:44.819952 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49\": container with ID starting with c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49 not found: ID does not exist" containerID="c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.820066 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49"} err="failed to get container status \"c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49\": rpc error: code = NotFound desc = could not find container \"c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49\": container with ID starting with c154df799caf9db49efbf3b8831a45e52b28fb4d09adb98a6d0d75664ec02d49 not found: ID does not exist"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.820160 4752 scope.go:117] "RemoveContainer" containerID="b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3"
Jan 22 11:40:44 crc kubenswrapper[4752]: E0122 11:40:44.820660 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3\": container with ID starting with b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3 not found: ID does not exist" containerID="b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.820728 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3"} err="failed to get container status \"b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3\": rpc error: code = NotFound desc = could not find container \"b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3\": container with ID starting with b5d9dace72ab5e5ff59c1674d3b2b78326ac89daa61b7a5ec0ba2df9a505f8e3 not found: ID does not exist"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.820776 4752 scope.go:117] "RemoveContainer" containerID="fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30"
Jan 22 11:40:44 crc kubenswrapper[4752]: E0122 11:40:44.821270 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30\": container with ID starting with fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30 not found: ID does not exist" containerID="fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30"
Jan 22 11:40:44 crc kubenswrapper[4752]: I0122 11:40:44.821362 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30"} err="failed to get container status \"fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30\": rpc error: code = NotFound desc = could not find container \"fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30\": container with ID starting with fd9589615331b1cab3f652fc54d4180e83138ffdfea75cc9ea4af8583bc6ab30 not found: ID does not exist"
Jan 22 11:40:45 crc kubenswrapper[4752]: I0122 11:40:45.111218 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" path="/var/lib/kubelet/pods/937a4c25-1fa8-4669-b9fd-d626aa269489/volumes"
Jan 22 11:40:57 crc kubenswrapper[4752]: I0122 11:40:57.723896 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:40:57 crc kubenswrapper[4752]: I0122 11:40:57.724755 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:41:27 crc kubenswrapper[4752]: I0122 11:41:27.723559 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:41:27 crc kubenswrapper[4752]: I0122 11:41:27.724252 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:41:27 crc kubenswrapper[4752]: I0122 11:41:27.724318 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 11:41:27 crc kubenswrapper[4752]: I0122 11:41:27.725420 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 11:41:27 crc kubenswrapper[4752]: I0122 11:41:27.725536 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" gracePeriod=600
Jan 22 11:41:27 crc kubenswrapper[4752]: E0122 11:41:27.855010 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:41:28 crc kubenswrapper[4752]: I0122 11:41:28.155636 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" exitCode=0
Jan 22 11:41:28 crc kubenswrapper[4752]: I0122 11:41:28.155679 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"}
Jan 22 11:41:28 crc kubenswrapper[4752]: I0122 11:41:28.155713 4752 scope.go:117] "RemoveContainer" containerID="da61bc29cc6e108a8b9eff72384c2d68f8124d29398c2fbbcef8ea281fb27923"
Jan 22 11:41:28 crc kubenswrapper[4752]: I0122 11:41:28.156267 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:41:28 crc kubenswrapper[4752]: E0122 11:41:28.156661 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:41:40 crc kubenswrapper[4752]: I0122 11:41:40.098424 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:41:40 crc kubenswrapper[4752]: E0122 11:41:40.099817 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:41:55 crc kubenswrapper[4752]: I0122 11:41:55.098537 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:41:55 crc kubenswrapper[4752]: E0122 11:41:55.099781 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:42:07 crc kubenswrapper[4752]: I0122 11:42:07.098150 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:42:07 crc kubenswrapper[4752]: E0122 11:42:07.099011 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:42:18 crc kubenswrapper[4752]: I0122 11:42:18.099607 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:42:18 crc kubenswrapper[4752]: E0122 11:42:18.101236 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:42:31 crc kubenswrapper[4752]: I0122 11:42:31.105109 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:42:31 crc kubenswrapper[4752]: E0122 11:42:31.106104 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:42:43 crc kubenswrapper[4752]: I0122 11:42:43.098850 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:42:43 crc kubenswrapper[4752]: E0122 11:42:43.099876 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:42:54 crc kubenswrapper[4752]: I0122 11:42:54.098117 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:42:54 crc kubenswrapper[4752]: E0122 11:42:54.098982 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:43:07 crc kubenswrapper[4752]: I0122 11:43:07.097674 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:43:07 crc kubenswrapper[4752]: E0122 11:43:07.098369 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:43:19 crc kubenswrapper[4752]: I0122 11:43:19.097940 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:43:19 crc kubenswrapper[4752]: E0122 11:43:19.100123 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:43:31 crc kubenswrapper[4752]: I0122 11:43:31.112233 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:43:31 crc kubenswrapper[4752]: E0122 11:43:31.113322 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:43:41 crc kubenswrapper[4752]: I0122 11:43:41.995121 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrst2"]
Jan 22 11:43:41 crc kubenswrapper[4752]: E0122 11:43:41.996212 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="registry-server"
Jan 22 11:43:41 crc kubenswrapper[4752]: I0122 11:43:41.996227 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="registry-server"
Jan 22 11:43:41 crc kubenswrapper[4752]: E0122 11:43:41.996246 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="extract-utilities"
Jan 22 11:43:41 crc kubenswrapper[4752]: I0122 11:43:41.996258 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="extract-utilities"
Jan 22 11:43:41 crc kubenswrapper[4752]: E0122 11:43:41.996291 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="extract-content"
Jan 22 11:43:41 crc kubenswrapper[4752]: I0122 11:43:41.996299 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="extract-content"
Jan 22 11:43:41 crc kubenswrapper[4752]: I0122 11:43:41.996577 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="937a4c25-1fa8-4669-b9fd-d626aa269489" containerName="registry-server"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:41.998522 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.021288 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrst2"]
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.085737 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-utilities\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.085904 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz7fk\" (UniqueName: \"kubernetes.io/projected/513be8cd-3070-4f51-bed3-1244d749454c-kube-api-access-bz7fk\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.085942 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-catalog-content\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.189401 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-utilities\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.189562 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz7fk\" (UniqueName: \"kubernetes.io/projected/513be8cd-3070-4f51-bed3-1244d749454c-kube-api-access-bz7fk\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.189626 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-catalog-content\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.190236 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-utilities\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.190605 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-catalog-content\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.216102 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz7fk\" (UniqueName: \"kubernetes.io/projected/513be8cd-3070-4f51-bed3-1244d749454c-kube-api-access-bz7fk\") pod \"redhat-operators-rrst2\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.325123 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrst2"
Jan 22 11:43:42 crc kubenswrapper[4752]: I0122 11:43:42.905737 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrst2"]
Jan 22 11:43:43 crc kubenswrapper[4752]: I0122 11:43:43.679738 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerStarted","Data":"864f17b807c1fcafcb0143e5879f640ba5ee463c803052e031643dd6c1d15e87"}
Jan 22 11:43:43 crc kubenswrapper[4752]: I0122 11:43:43.680314 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerStarted","Data":"3ada4c027818f64aa45a8bb9c715ee346ca0cd6fb8900b5225ba00f94bbb4bc3"}
Jan 22 11:43:44 crc kubenswrapper[4752]: I0122 11:43:44.695684 4752 generic.go:334] "Generic (PLEG): container finished" podID="513be8cd-3070-4f51-bed3-1244d749454c" containerID="864f17b807c1fcafcb0143e5879f640ba5ee463c803052e031643dd6c1d15e87" exitCode=0
Jan 22 11:43:44 crc kubenswrapper[4752]: I0122 11:43:44.695808 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerDied","Data":"864f17b807c1fcafcb0143e5879f640ba5ee463c803052e031643dd6c1d15e87"}
Jan 22 11:43:44 crc kubenswrapper[4752]: I0122 11:43:44.699835 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 11:43:46 crc kubenswrapper[4752]: I0122 11:43:46.098801 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924"
Jan 22 11:43:46 crc kubenswrapper[4752]: E0122 11:43:46.100182 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:43:46 crc kubenswrapper[4752]: I0122 11:43:46.723539 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerStarted","Data":"9eb6469b51ae6f75f19e2d0aa06eedf15b815183c4bb4cd08859e8f91dcca115"}
Jan 22 11:43:47 crc kubenswrapper[4752]: I0122 11:43:47.737223 4752 generic.go:334] "Generic (PLEG): container finished" podID="513be8cd-3070-4f51-bed3-1244d749454c" containerID="9eb6469b51ae6f75f19e2d0aa06eedf15b815183c4bb4cd08859e8f91dcca115" exitCode=0
Jan 22 11:43:47 crc kubenswrapper[4752]: I0122 11:43:47.737283 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2"
event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerDied","Data":"9eb6469b51ae6f75f19e2d0aa06eedf15b815183c4bb4cd08859e8f91dcca115"} Jan 22 11:43:48 crc kubenswrapper[4752]: I0122 11:43:48.758610 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerStarted","Data":"cd8f41a08447d6f8724a275924450116fcbf4194206b243c246bcccde3392755"} Jan 22 11:43:48 crc kubenswrapper[4752]: I0122 11:43:48.787316 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrst2" podStartSLOduration=4.329364865 podStartE2EDuration="7.787297811s" podCreationTimestamp="2026-01-22 11:43:41 +0000 UTC" firstStartedPulling="2026-01-22 11:43:44.69951504 +0000 UTC m=+4703.929457948" lastFinishedPulling="2026-01-22 11:43:48.157447976 +0000 UTC m=+4707.387390894" observedRunningTime="2026-01-22 11:43:48.782913097 +0000 UTC m=+4708.012856025" watchObservedRunningTime="2026-01-22 11:43:48.787297811 +0000 UTC m=+4708.017240709" Jan 22 11:43:52 crc kubenswrapper[4752]: I0122 11:43:52.326279 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrst2" Jan 22 11:43:52 crc kubenswrapper[4752]: I0122 11:43:52.327457 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrst2" Jan 22 11:43:53 crc kubenswrapper[4752]: I0122 11:43:53.608958 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rrst2" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="registry-server" probeResult="failure" output=< Jan 22 11:43:53 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 11:43:53 crc kubenswrapper[4752]: > Jan 22 11:43:58 crc kubenswrapper[4752]: I0122 11:43:58.098540 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:43:58 crc kubenswrapper[4752]: E0122 11:43:58.099702 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:44:02 crc kubenswrapper[4752]: I0122 11:44:02.395574 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrst2" Jan 22 11:44:02 crc kubenswrapper[4752]: I0122 11:44:02.465955 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrst2" Jan 22 11:44:02 crc kubenswrapper[4752]: I0122 11:44:02.642592 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrst2"] Jan 22 11:44:03 crc kubenswrapper[4752]: I0122 11:44:03.925909 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrst2" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="registry-server" containerID="cri-o://cd8f41a08447d6f8724a275924450116fcbf4194206b243c246bcccde3392755" gracePeriod=2 Jan 22 11:44:04 crc kubenswrapper[4752]: I0122 11:44:04.951414 4752 
generic.go:334] "Generic (PLEG): container finished" podID="513be8cd-3070-4f51-bed3-1244d749454c" containerID="cd8f41a08447d6f8724a275924450116fcbf4194206b243c246bcccde3392755" exitCode=0 Jan 22 11:44:04 crc kubenswrapper[4752]: I0122 11:44:04.951491 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerDied","Data":"cd8f41a08447d6f8724a275924450116fcbf4194206b243c246bcccde3392755"} Jan 22 11:44:04 crc kubenswrapper[4752]: I0122 11:44:04.951931 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrst2" event={"ID":"513be8cd-3070-4f51-bed3-1244d749454c","Type":"ContainerDied","Data":"3ada4c027818f64aa45a8bb9c715ee346ca0cd6fb8900b5225ba00f94bbb4bc3"} Jan 22 11:44:04 crc kubenswrapper[4752]: I0122 11:44:04.952024 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ada4c027818f64aa45a8bb9c715ee346ca0cd6fb8900b5225ba00f94bbb4bc3" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.024236 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrst2" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.120823 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-utilities\") pod \"513be8cd-3070-4f51-bed3-1244d749454c\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.121394 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-catalog-content\") pod \"513be8cd-3070-4f51-bed3-1244d749454c\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.121564 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz7fk\" (UniqueName: \"kubernetes.io/projected/513be8cd-3070-4f51-bed3-1244d749454c-kube-api-access-bz7fk\") pod \"513be8cd-3070-4f51-bed3-1244d749454c\" (UID: \"513be8cd-3070-4f51-bed3-1244d749454c\") " Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.122155 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-utilities" (OuterVolumeSpecName: "utilities") pod "513be8cd-3070-4f51-bed3-1244d749454c" (UID: "513be8cd-3070-4f51-bed3-1244d749454c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.128686 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513be8cd-3070-4f51-bed3-1244d749454c-kube-api-access-bz7fk" (OuterVolumeSpecName: "kube-api-access-bz7fk") pod "513be8cd-3070-4f51-bed3-1244d749454c" (UID: "513be8cd-3070-4f51-bed3-1244d749454c"). InnerVolumeSpecName "kube-api-access-bz7fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.224587 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz7fk\" (UniqueName: \"kubernetes.io/projected/513be8cd-3070-4f51-bed3-1244d749454c-kube-api-access-bz7fk\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.224630 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.270802 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "513be8cd-3070-4f51-bed3-1244d749454c" (UID: "513be8cd-3070-4f51-bed3-1244d749454c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.327277 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/513be8cd-3070-4f51-bed3-1244d749454c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:05 crc kubenswrapper[4752]: I0122 11:44:05.959023 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrst2" Jan 22 11:44:06 crc kubenswrapper[4752]: I0122 11:44:06.011517 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrst2"] Jan 22 11:44:06 crc kubenswrapper[4752]: I0122 11:44:06.025157 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrst2"] Jan 22 11:44:06 crc kubenswrapper[4752]: E0122 11:44:06.067150 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod513be8cd_3070_4f51_bed3_1244d749454c.slice\": RecentStats: unable to find data in memory cache]" Jan 22 11:44:07 crc kubenswrapper[4752]: I0122 11:44:07.112912 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513be8cd-3070-4f51-bed3-1244d749454c" path="/var/lib/kubelet/pods/513be8cd-3070-4f51-bed3-1244d749454c/volumes" Jan 22 11:44:10 crc kubenswrapper[4752]: I0122 11:44:10.098045 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:44:10 crc kubenswrapper[4752]: E0122 11:44:10.098579 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:44:23 crc kubenswrapper[4752]: I0122 11:44:23.098065 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:44:23 crc kubenswrapper[4752]: E0122 11:44:23.118737 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.224213 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn7lr"] Jan 22 11:44:35 crc kubenswrapper[4752]: E0122 11:44:35.225589 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="extract-utilities" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.225614 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="extract-utilities" Jan 22 11:44:35 crc kubenswrapper[4752]: E0122 11:44:35.225664 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="registry-server" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.225676 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="registry-server" Jan 22 11:44:35 crc kubenswrapper[4752]: E0122 11:44:35.225711 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="extract-content" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.225722 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="extract-content" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.226042 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="513be8cd-3070-4f51-bed3-1244d749454c" containerName="registry-server" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.228459 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.234759 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-kube-api-access-lgd5p\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.234973 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-utilities\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.235095 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-catalog-content\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.244091 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn7lr"] Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.337987 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-utilities\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.338139 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-catalog-content\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.338263 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-kube-api-access-lgd5p\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.338787 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-utilities\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.338834 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-catalog-content\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.379695 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-kube-api-access-lgd5p\") pod \"community-operators-cn7lr\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.548999 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:35 crc kubenswrapper[4752]: I0122 11:44:35.919866 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn7lr"] Jan 22 11:44:36 crc kubenswrapper[4752]: I0122 11:44:36.300683 4752 generic.go:334] "Generic (PLEG): container finished" podID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerID="65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7" exitCode=0 Jan 22 11:44:36 crc kubenswrapper[4752]: I0122 11:44:36.300754 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerDied","Data":"65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7"} Jan 22 11:44:36 crc kubenswrapper[4752]: I0122 11:44:36.301067 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerStarted","Data":"43b8438935b8e280a9c2f328405db85daeeba8696797b66f910066ad50b1e612"} Jan 22 11:44:37 crc kubenswrapper[4752]: I0122 11:44:37.312029 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerStarted","Data":"883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b"} Jan 22 11:44:38 crc kubenswrapper[4752]: I0122 11:44:38.097720 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:44:38 crc kubenswrapper[4752]: E0122 11:44:38.098017 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:44:38 crc kubenswrapper[4752]: I0122 11:44:38.333299 4752 generic.go:334] "Generic (PLEG): container finished" podID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerID="883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b" exitCode=0 Jan 22 11:44:38 crc kubenswrapper[4752]: I0122 11:44:38.333356 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerDied","Data":"883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b"} Jan 22 11:44:39 crc kubenswrapper[4752]: I0122 11:44:39.343366 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerStarted","Data":"dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed"} Jan 22 11:44:39 crc kubenswrapper[4752]: I0122 11:44:39.369176 4752 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn7lr" podStartSLOduration=1.947273219 podStartE2EDuration="4.369151842s" podCreationTimestamp="2026-01-22 11:44:35 +0000 UTC" firstStartedPulling="2026-01-22 11:44:36.302672988 +0000 UTC m=+4755.532615896" lastFinishedPulling="2026-01-22 11:44:38.724551611 +0000 UTC m=+4757.954494519" observedRunningTime="2026-01-22 11:44:39.366156224 +0000 UTC m=+4758.596099142" watchObservedRunningTime="2026-01-22 11:44:39.369151842 +0000 UTC m=+4758.599094750" Jan 22 11:44:45 crc kubenswrapper[4752]: I0122 11:44:45.549931 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:45 crc kubenswrapper[4752]: I0122 11:44:45.550754 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:45 crc kubenswrapper[4752]: I0122 11:44:45.913652 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:46 crc kubenswrapper[4752]: I0122 11:44:46.497342 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:46 crc kubenswrapper[4752]: I0122 11:44:46.555932 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn7lr"] Jan 22 11:44:48 crc kubenswrapper[4752]: I0122 11:44:48.450604 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn7lr" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="registry-server" containerID="cri-o://dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed" gracePeriod=2 Jan 22 11:44:48 crc kubenswrapper[4752]: I0122 11:44:48.945777 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.038300 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-kube-api-access-lgd5p\") pod \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.038586 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-catalog-content\") pod \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.038819 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-utilities\") pod \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\" (UID: \"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956\") " Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.041062 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-utilities" (OuterVolumeSpecName: "utilities") pod "a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" (UID: "a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.045057 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-kube-api-access-lgd5p" (OuterVolumeSpecName: "kube-api-access-lgd5p") pod "a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" (UID: "a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956"). InnerVolumeSpecName "kube-api-access-lgd5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.103128 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" (UID: "a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.141237 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.141280 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgd5p\" (UniqueName: \"kubernetes.io/projected/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-kube-api-access-lgd5p\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.141301 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.466153 4752 generic.go:334] "Generic (PLEG): container finished" podID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerID="dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed" exitCode=0 Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.466205 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerDied","Data":"dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed"} Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.466239 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn7lr" event={"ID":"a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956","Type":"ContainerDied","Data":"43b8438935b8e280a9c2f328405db85daeeba8696797b66f910066ad50b1e612"} Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.466263 4752 scope.go:117] "RemoveContainer" containerID="dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.466422 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn7lr" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.499194 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn7lr"] Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.504849 4752 scope.go:117] "RemoveContainer" containerID="883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.510672 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn7lr"] Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.549427 4752 scope.go:117] "RemoveContainer" containerID="65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.581431 4752 scope.go:117] "RemoveContainer" containerID="dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed" Jan 22 11:44:49 crc kubenswrapper[4752]: E0122 11:44:49.581902 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed\": container with ID starting with dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed not found: ID does not exist" containerID="dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.582052 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed"} err="failed to get container status \"dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed\": rpc error: code = NotFound desc = could not find container \"dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed\": container with ID starting with dba23988c9b4ce31a8241f34a0884c313bc0fbf9a4c891c833e57539a172fbed not found: ID does not exist" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.582176 4752 scope.go:117] "RemoveContainer" containerID="883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b" Jan 22 11:44:49 crc kubenswrapper[4752]: E0122 11:44:49.582697 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b\": container with ID starting with 883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b not found: ID does not exist" containerID="883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.582724 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b"} err="failed to get container status \"883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b\": rpc error: code = NotFound desc = could not find container \"883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b\": container with ID starting with 883e24b37d8e4f331cc6c38a850f4d509ce59f12c4c2b3a6bc73811f0d98c80b not found: ID does not exist" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.582746 4752 scope.go:117] "RemoveContainer" containerID="65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7" Jan 22 11:44:49 crc kubenswrapper[4752]: E0122 11:44:49.583118 4752 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7\": container with ID starting with 65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7 not found: ID does not exist" containerID="65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7" Jan 22 11:44:49 crc kubenswrapper[4752]: I0122 11:44:49.583140 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7"} err="failed to get container status \"65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7\": rpc error: code = NotFound desc = could not find container \"65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7\": container with ID starting with 65159763278e5667b45df835f84f430c10a98921b5d8475e35e02cdfdb1cf7c7 not found: ID does not exist" Jan 22 11:44:51 crc kubenswrapper[4752]: I0122 11:44:51.118942 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" path="/var/lib/kubelet/pods/a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956/volumes" Jan 22 11:44:53 crc kubenswrapper[4752]: I0122 11:44:53.098776 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:44:53 crc kubenswrapper[4752]: E0122 11:44:53.099427 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.156518 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9"] Jan 22 11:45:00 crc kubenswrapper[4752]: E0122 11:45:00.157588 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="registry-server" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.157604 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="registry-server" Jan 22 11:45:00 crc kubenswrapper[4752]: E0122 11:45:00.157638 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="extract-utilities" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.157646 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="extract-utilities" Jan 22 11:45:00 crc kubenswrapper[4752]: E0122 11:45:00.157660 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="extract-content" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.157668 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="extract-content" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.157896 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b3d41c-f6bb-4b56-a27e-0c89ae7bb956" containerName="registry-server" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.158738 4752 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.161494 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.173415 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9"] Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.178500 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.292072 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9nr\" (UniqueName: \"kubernetes.io/projected/e2313ce1-0846-4187-89b5-4b0dab59cc0c-kube-api-access-jw9nr\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.292167 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2313ce1-0846-4187-89b5-4b0dab59cc0c-secret-volume\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.292249 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2313ce1-0846-4187-89b5-4b0dab59cc0c-config-volume\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.395172 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9nr\" (UniqueName: \"kubernetes.io/projected/e2313ce1-0846-4187-89b5-4b0dab59cc0c-kube-api-access-jw9nr\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.395626 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2313ce1-0846-4187-89b5-4b0dab59cc0c-secret-volume\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.395710 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2313ce1-0846-4187-89b5-4b0dab59cc0c-config-volume\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.397251 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e2313ce1-0846-4187-89b5-4b0dab59cc0c-config-volume\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.408015 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2313ce1-0846-4187-89b5-4b0dab59cc0c-secret-volume\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.419736 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9nr\" (UniqueName: \"kubernetes.io/projected/e2313ce1-0846-4187-89b5-4b0dab59cc0c-kube-api-access-jw9nr\") pod \"collect-profiles-29484705-jvcl9\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:00 crc kubenswrapper[4752]: I0122 11:45:00.490481 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:01 crc kubenswrapper[4752]: I0122 11:45:01.002722 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9"] Jan 22 11:45:01 crc kubenswrapper[4752]: I0122 11:45:01.598628 4752 generic.go:334] "Generic (PLEG): container finished" podID="e2313ce1-0846-4187-89b5-4b0dab59cc0c" containerID="91215b79f893fe6a9da1c21cb3bb5ba54654caa004fc392b0e2fb74387761280" exitCode=0 Jan 22 11:45:01 crc kubenswrapper[4752]: I0122 11:45:01.598736 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" event={"ID":"e2313ce1-0846-4187-89b5-4b0dab59cc0c","Type":"ContainerDied","Data":"91215b79f893fe6a9da1c21cb3bb5ba54654caa004fc392b0e2fb74387761280"} Jan 22 11:45:01 crc kubenswrapper[4752]: I0122 11:45:01.599238 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" event={"ID":"e2313ce1-0846-4187-89b5-4b0dab59cc0c","Type":"ContainerStarted","Data":"7399c3709f835b9775bd81d9b76912ab178a5cd5d9c3079e56735eede53997a0"} Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.036602 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.170866 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2313ce1-0846-4187-89b5-4b0dab59cc0c-secret-volume\") pod \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.171095 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2313ce1-0846-4187-89b5-4b0dab59cc0c-config-volume\") pod \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.171299 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw9nr\" (UniqueName: \"kubernetes.io/projected/e2313ce1-0846-4187-89b5-4b0dab59cc0c-kube-api-access-jw9nr\") pod \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\" (UID: \"e2313ce1-0846-4187-89b5-4b0dab59cc0c\") " Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.172395 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2313ce1-0846-4187-89b5-4b0dab59cc0c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2313ce1-0846-4187-89b5-4b0dab59cc0c" (UID: "e2313ce1-0846-4187-89b5-4b0dab59cc0c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.179340 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2313ce1-0846-4187-89b5-4b0dab59cc0c-kube-api-access-jw9nr" (OuterVolumeSpecName: "kube-api-access-jw9nr") pod "e2313ce1-0846-4187-89b5-4b0dab59cc0c" (UID: "e2313ce1-0846-4187-89b5-4b0dab59cc0c"). InnerVolumeSpecName "kube-api-access-jw9nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.185002 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2313ce1-0846-4187-89b5-4b0dab59cc0c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2313ce1-0846-4187-89b5-4b0dab59cc0c" (UID: "e2313ce1-0846-4187-89b5-4b0dab59cc0c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.274961 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2313ce1-0846-4187-89b5-4b0dab59cc0c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.274999 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2313ce1-0846-4187-89b5-4b0dab59cc0c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.275009 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw9nr\" (UniqueName: \"kubernetes.io/projected/e2313ce1-0846-4187-89b5-4b0dab59cc0c-kube-api-access-jw9nr\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.622395 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" event={"ID":"e2313ce1-0846-4187-89b5-4b0dab59cc0c","Type":"ContainerDied","Data":"7399c3709f835b9775bd81d9b76912ab178a5cd5d9c3079e56735eede53997a0"} Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.622511 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7399c3709f835b9775bd81d9b76912ab178a5cd5d9c3079e56735eede53997a0" Jan 22 11:45:03 crc kubenswrapper[4752]: I0122 11:45:03.622421 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-jvcl9" Jan 22 11:45:04 crc kubenswrapper[4752]: I0122 11:45:04.128435 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd"] Jan 22 11:45:04 crc kubenswrapper[4752]: I0122 11:45:04.138656 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484660-6qwwd"] Jan 22 11:45:05 crc kubenswrapper[4752]: I0122 11:45:05.112072 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e61af5a-ef99-48d5-9d12-8e3ad639a94f" path="/var/lib/kubelet/pods/4e61af5a-ef99-48d5-9d12-8e3ad639a94f/volumes" Jan 22 11:45:08 crc kubenswrapper[4752]: I0122 11:45:08.099177 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:45:08 crc kubenswrapper[4752]: E0122 11:45:08.100127 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:45:23 crc kubenswrapper[4752]: I0122 11:45:23.097950 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:45:23 crc kubenswrapper[4752]: E0122 11:45:23.098716 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:45:36 crc kubenswrapper[4752]: I0122 11:45:36.098016 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:45:36 crc kubenswrapper[4752]: E0122 11:45:36.098844 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:45:47 crc kubenswrapper[4752]: I0122 11:45:47.098153 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:45:47 crc kubenswrapper[4752]: E0122 11:45:47.099093 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:45:47 crc kubenswrapper[4752]: I0122 11:45:47.705751 4752 scope.go:117] "RemoveContainer" containerID="6e8daad72678adf1d19b47f243e1d700f1f66093fe4f2ce9a81aa25422f40e32" Jan 22 11:46:01 crc kubenswrapper[4752]: I0122 11:46:01.109508 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:46:01 crc kubenswrapper[4752]: E0122 11:46:01.110639 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:46:16 crc kubenswrapper[4752]: I0122 11:46:16.098376 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:46:16 crc kubenswrapper[4752]: E0122 11:46:16.099360 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:46:28 crc kubenswrapper[4752]: I0122 11:46:28.098645 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:46:28 crc kubenswrapper[4752]: I0122 11:46:28.469981 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"7de9cf776e424959bcf40532860d0d02617c023f0ddfc34f688b7ddbe13dc305"} Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 
11:47:38.340658 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xpgzf"] Jan 22 11:47:38 crc kubenswrapper[4752]: E0122 11:47:38.341689 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2313ce1-0846-4187-89b5-4b0dab59cc0c" containerName="collect-profiles" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.341705 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2313ce1-0846-4187-89b5-4b0dab59cc0c" containerName="collect-profiles" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.341995 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2313ce1-0846-4187-89b5-4b0dab59cc0c" containerName="collect-profiles" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.343796 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.359624 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpgzf"] Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.413096 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdxt\" (UniqueName: \"kubernetes.io/projected/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-kube-api-access-jsdxt\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.413179 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-catalog-content\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.413326 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-utilities\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.515382 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdxt\" (UniqueName: \"kubernetes.io/projected/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-kube-api-access-jsdxt\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.515493 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-catalog-content\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.515536 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-utilities\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc 
kubenswrapper[4752]: I0122 11:47:38.516169 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-utilities\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.516754 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-catalog-content\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.539813 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdxt\" (UniqueName: \"kubernetes.io/projected/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-kube-api-access-jsdxt\") pod \"redhat-marketplace-xpgzf\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:38 crc kubenswrapper[4752]: I0122 11:47:38.672976 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:39 crc kubenswrapper[4752]: I0122 11:47:39.177458 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpgzf"] Jan 22 11:47:40 crc kubenswrapper[4752]: I0122 11:47:40.179123 4752 generic.go:334] "Generic (PLEG): container finished" podID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerID="de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db" exitCode=0 Jan 22 11:47:40 crc kubenswrapper[4752]: I0122 11:47:40.179183 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerDied","Data":"de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db"} Jan 22 11:47:40 crc kubenswrapper[4752]: I0122 11:47:40.179249 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerStarted","Data":"ce33f2bc1db06751bde5262edcdcca619572c4dbe85fcba82365711ce41273f5"} Jan 22 11:47:41 crc kubenswrapper[4752]: I0122 11:47:41.189677 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerStarted","Data":"b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9"} Jan 22 11:47:42 crc kubenswrapper[4752]: I0122 11:47:42.198776 4752 generic.go:334] "Generic (PLEG): container finished" podID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerID="b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9" exitCode=0 Jan 22 11:47:42 crc kubenswrapper[4752]: I0122 11:47:42.198850 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerDied","Data":"b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9"} Jan 22 11:47:43 crc kubenswrapper[4752]: I0122 11:47:43.208553 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" 
event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerStarted","Data":"ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92"} Jan 22 11:47:43 crc kubenswrapper[4752]: I0122 11:47:43.237494 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xpgzf" podStartSLOduration=2.811839666 podStartE2EDuration="5.237474787s" podCreationTimestamp="2026-01-22 11:47:38 +0000 UTC" firstStartedPulling="2026-01-22 11:47:40.18122855 +0000 UTC m=+4939.411171458" lastFinishedPulling="2026-01-22 11:47:42.606863671 +0000 UTC m=+4941.836806579" observedRunningTime="2026-01-22 11:47:43.231197983 +0000 UTC m=+4942.461140891" watchObservedRunningTime="2026-01-22 11:47:43.237474787 +0000 UTC m=+4942.467417695" Jan 22 11:47:48 crc kubenswrapper[4752]: I0122 11:47:48.673651 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:48 crc kubenswrapper[4752]: I0122 11:47:48.674181 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:48 crc kubenswrapper[4752]: I0122 11:47:48.720913 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:49 crc kubenswrapper[4752]: I0122 11:47:49.312450 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:49 crc kubenswrapper[4752]: I0122 11:47:49.379456 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpgzf"] Jan 22 11:47:51 crc kubenswrapper[4752]: I0122 11:47:51.292624 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xpgzf" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="registry-server" containerID="cri-o://ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92" gracePeriod=2 Jan 22 11:47:51 crc kubenswrapper[4752]: I0122 11:47:51.940336 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:51 crc kubenswrapper[4752]: I0122 11:47:51.999755 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsdxt\" (UniqueName: \"kubernetes.io/projected/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-kube-api-access-jsdxt\") pod \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.000132 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-utilities\") pod \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.000337 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-catalog-content\") pod \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\" (UID: \"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a\") " Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.002449 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-utilities" (OuterVolumeSpecName: "utilities") pod "d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" (UID: "d584ad91-e309-4b9b-bb25-d5a6e80e8e9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.007117 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-kube-api-access-jsdxt" (OuterVolumeSpecName: "kube-api-access-jsdxt") pod "d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" (UID: "d584ad91-e309-4b9b-bb25-d5a6e80e8e9a"). InnerVolumeSpecName "kube-api-access-jsdxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.023318 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" (UID: "d584ad91-e309-4b9b-bb25-d5a6e80e8e9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.103489 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.104270 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.104865 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsdxt\" (UniqueName: \"kubernetes.io/projected/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a-kube-api-access-jsdxt\") on node \"crc\" DevicePath \"\"" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.306261 4752 generic.go:334] "Generic (PLEG): container finished" podID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerID="ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92" exitCode=0 Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.306373 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpgzf" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.306365 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerDied","Data":"ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92"} Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.306463 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpgzf" event={"ID":"d584ad91-e309-4b9b-bb25-d5a6e80e8e9a","Type":"ContainerDied","Data":"ce33f2bc1db06751bde5262edcdcca619572c4dbe85fcba82365711ce41273f5"} Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.306501 4752 scope.go:117] "RemoveContainer" containerID="ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.345184 4752 scope.go:117] "RemoveContainer" containerID="b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.348389 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpgzf"] Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.374837 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpgzf"] Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.382606 4752 scope.go:117] "RemoveContainer" containerID="de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.425734 4752 scope.go:117] "RemoveContainer" containerID="ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92" Jan 22 11:47:52 crc kubenswrapper[4752]: E0122 11:47:52.426496 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92\": container with ID starting with ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92 not found: ID does not exist" containerID="ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.426570 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92"} err="failed to get container status \"ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92\": rpc error: code = NotFound desc = could not find container \"ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92\": container with ID starting with ebf24e973e4daebfdf43df34aa9ea2cf70c6ce85333c4e52590122edca20ff92 not found: ID does not exist" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.426607 4752 scope.go:117] "RemoveContainer" containerID="b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9" Jan 22 11:47:52 crc kubenswrapper[4752]: E0122 11:47:52.426995 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9\": container with ID starting with b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9 not found: ID does not exist" containerID="b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.427028 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9"} err="failed to get container status \"b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9\": rpc error: code = NotFound desc = could not find container \"b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9\": container with ID starting with b7a964765c391c16d7b6493567c0f7c5573bec8f917789c01e9813dff5a550b9 not found: ID does not exist" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.427045 4752 scope.go:117] "RemoveContainer" containerID="de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db" Jan 22 11:47:52 crc kubenswrapper[4752]: E0122 11:47:52.427473 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db\": container with ID starting with de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db not found: ID does not exist" containerID="de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db" Jan 22 11:47:52 crc kubenswrapper[4752]: I0122 11:47:52.427506 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db"} err="failed to get container status \"de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db\": rpc error: code = NotFound desc = could not find container \"de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db\": container with ID starting with de429aecfe70039d38b79b547591c4297a4c134039354a7e25560d0108aac4db not found: ID does not exist" Jan 22 11:47:53 crc kubenswrapper[4752]: I0122 11:47:53.118744 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" path="/var/lib/kubelet/pods/d584ad91-e309-4b9b-bb25-d5a6e80e8e9a/volumes" Jan 22 11:48:57 crc kubenswrapper[4752]: I0122 11:48:57.723516 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:48:57 crc kubenswrapper[4752]: I0122 11:48:57.724247 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[4752]: I0122 11:49:27.724507 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:49:27 crc kubenswrapper[4752]: I0122 11:49:27.725385 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:49:47 crc kubenswrapper[4752]: I0122 11:49:47.869956 4752 scope.go:117] "RemoveContainer" containerID="9eb6469b51ae6f75f19e2d0aa06eedf15b815183c4bb4cd08859e8f91dcca115" Jan 22 11:49:47 crc kubenswrapper[4752]: I0122 11:49:47.938170 4752 scope.go:117] "RemoveContainer" containerID="864f17b807c1fcafcb0143e5879f640ba5ee463c803052e031643dd6c1d15e87" Jan 22 11:49:57 crc kubenswrapper[4752]: I0122 11:49:57.723395 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:49:57 crc kubenswrapper[4752]: I0122 11:49:57.724413 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:49:57 crc kubenswrapper[4752]: I0122 11:49:57.724469 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 11:49:57 crc kubenswrapper[4752]: I0122 11:49:57.725475 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7de9cf776e424959bcf40532860d0d02617c023f0ddfc34f688b7ddbe13dc305"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:49:57 crc kubenswrapper[4752]: I0122 11:49:57.725571 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://7de9cf776e424959bcf40532860d0d02617c023f0ddfc34f688b7ddbe13dc305" gracePeriod=600 Jan 22 11:49:58 crc kubenswrapper[4752]: I0122 11:49:58.632486 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" 
containerID="7de9cf776e424959bcf40532860d0d02617c023f0ddfc34f688b7ddbe13dc305" exitCode=0 Jan 22 11:49:58 crc kubenswrapper[4752]: I0122 11:49:58.632583 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"7de9cf776e424959bcf40532860d0d02617c023f0ddfc34f688b7ddbe13dc305"} Jan 22 11:49:58 crc kubenswrapper[4752]: I0122 11:49:58.633179 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0"} Jan 22 11:49:58 crc kubenswrapper[4752]: I0122 11:49:58.633221 4752 scope.go:117] "RemoveContainer" containerID="27cdeb1cd11ef1d50e301c3f93a6ba2d19a6b65465fee14030bf079bf16e4924" Jan 22 11:50:47 crc kubenswrapper[4752]: I0122 11:50:47.996785 4752 scope.go:117] "RemoveContainer" containerID="cd8f41a08447d6f8724a275924450116fcbf4194206b243c246bcccde3392755" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.728337 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sbnvj"] Jan 22 11:50:50 crc kubenswrapper[4752]: E0122 11:50:50.729245 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="extract-content" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.729260 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="extract-content" Jan 22 11:50:50 crc kubenswrapper[4752]: E0122 11:50:50.729305 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="registry-server" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.729313 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="registry-server" Jan 22 11:50:50 crc kubenswrapper[4752]: E0122 11:50:50.729330 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="extract-utilities" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.729337 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="extract-utilities" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.729585 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="d584ad91-e309-4b9b-bb25-d5a6e80e8e9a" containerName="registry-server" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.731383 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.760154 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbnvj"] Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.795146 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-utilities\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.795379 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-catalog-content\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.795654 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82dnr\" (UniqueName: \"kubernetes.io/projected/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-kube-api-access-82dnr\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.899057 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-utilities\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.899176 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-catalog-content\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.899279 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82dnr\" (UniqueName: \"kubernetes.io/projected/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-kube-api-access-82dnr\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.899624 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-utilities\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.899740 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-catalog-content\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:50 crc kubenswrapper[4752]: I0122 11:50:50.918804 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-82dnr\" (UniqueName: \"kubernetes.io/projected/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-kube-api-access-82dnr\") pod \"certified-operators-sbnvj\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:51 crc kubenswrapper[4752]: I0122 11:50:51.069868 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:50:51 crc kubenswrapper[4752]: I0122 11:50:51.658226 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbnvj"] Jan 22 11:50:52 crc kubenswrapper[4752]: I0122 11:50:52.265809 4752 generic.go:334] "Generic (PLEG): container finished" podID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerID="bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa" exitCode=0 Jan 22 11:50:52 crc kubenswrapper[4752]: I0122 11:50:52.265950 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerDied","Data":"bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa"} Jan 22 11:50:52 crc kubenswrapper[4752]: I0122 11:50:52.266198 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerStarted","Data":"b5f2c31a7e3b683d238b4325a53e3ff1ce6843ac90258882d35856132d3a1d13"} Jan 22 11:50:52 crc kubenswrapper[4752]: I0122 11:50:52.268122 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:50:53 crc kubenswrapper[4752]: I0122 11:50:53.275899 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerStarted","Data":"cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb"} Jan 22 11:50:54 crc kubenswrapper[4752]: I0122 11:50:54.288390 4752 generic.go:334] "Generic (PLEG): container finished" podID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerID="cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb" exitCode=0 Jan 22 11:50:54 crc kubenswrapper[4752]: I0122 11:50:54.288492 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerDied","Data":"cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb"} Jan 22 11:50:55 crc kubenswrapper[4752]: I0122 11:50:55.300166 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerStarted","Data":"9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320"} Jan 22 11:50:55 crc kubenswrapper[4752]: I0122 11:50:55.330573 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sbnvj" podStartSLOduration=2.84669653 podStartE2EDuration="5.330527229s" podCreationTimestamp="2026-01-22 11:50:50 +0000 UTC" firstStartedPulling="2026-01-22 11:50:52.2677062 +0000 UTC m=+5131.497649118" lastFinishedPulling="2026-01-22 11:50:54.751536909 +0000 UTC m=+5133.981479817" observedRunningTime="2026-01-22 11:50:55.319201964 +0000 UTC m=+5134.549144872" watchObservedRunningTime="2026-01-22 
11:50:55.330527229 +0000 UTC m=+5134.560470147" Jan 22 11:51:01 crc kubenswrapper[4752]: I0122 11:51:01.070529 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:51:01 crc kubenswrapper[4752]: I0122 11:51:01.071037 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:51:01 crc kubenswrapper[4752]: I0122 11:51:01.140676 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:51:01 crc kubenswrapper[4752]: I0122 11:51:01.421329 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:51:01 crc kubenswrapper[4752]: I0122 11:51:01.472543 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbnvj"] Jan 22 11:51:03 crc kubenswrapper[4752]: I0122 11:51:03.379640 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sbnvj" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="registry-server" containerID="cri-o://9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320" gracePeriod=2 Jan 22 11:51:03 crc kubenswrapper[4752]: I0122 11:51:03.959237 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.080593 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82dnr\" (UniqueName: \"kubernetes.io/projected/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-kube-api-access-82dnr\") pod \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.080770 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-utilities\") pod \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.080876 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-catalog-content\") pod \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\" (UID: \"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922\") " Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.082812 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-utilities" (OuterVolumeSpecName: "utilities") pod "0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" (UID: "0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.091550 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-kube-api-access-82dnr" (OuterVolumeSpecName: "kube-api-access-82dnr") pod "0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" (UID: "0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922"). InnerVolumeSpecName "kube-api-access-82dnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.129032 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" (UID: "0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.183646 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82dnr\" (UniqueName: \"kubernetes.io/projected/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-kube-api-access-82dnr\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.183686 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.183695 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.390551 4752 generic.go:334] "Generic (PLEG): container finished" podID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerID="9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320" exitCode=0 Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.390605 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerDied","Data":"9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320"} Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.390657 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbnvj" event={"ID":"0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922","Type":"ContainerDied","Data":"b5f2c31a7e3b683d238b4325a53e3ff1ce6843ac90258882d35856132d3a1d13"} Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.390675 4752 scope.go:117] "RemoveContainer" containerID="9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.390671 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbnvj" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.418478 4752 scope.go:117] "RemoveContainer" containerID="cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.443310 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbnvj"] Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.452486 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sbnvj"] Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.480142 4752 scope.go:117] "RemoveContainer" containerID="bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.525736 4752 scope.go:117] "RemoveContainer" containerID="9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320" Jan 22 11:51:04 crc kubenswrapper[4752]: E0122 11:51:04.527401 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320\": container with ID starting with 9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320 not found: ID does not exist" containerID="9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.527458 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320"} err="failed to get container status \"9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320\": rpc error: code = NotFound desc = could not find container \"9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320\": container with ID starting with 9ad7941229d742a8d0a3166d3e2233a90df95dbb0d62b837951ee8129277b320 not found: ID does not exist" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.527494 4752 scope.go:117] "RemoveContainer" containerID="cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb" Jan 22 11:51:04 crc kubenswrapper[4752]: E0122 11:51:04.529835 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb\": container with ID starting with cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb not found: ID does not exist" containerID="cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.529930 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb"} err="failed to get container status \"cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb\": rpc error: code = NotFound desc = could not find container \"cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb\": container with ID starting with cdc4cfd528a3725afdff80a146997d2d15ca8bb12bb5ac640aeb5a539aebebbb not found: ID does not exist" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.529973 4752 scope.go:117] "RemoveContainer" containerID="bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa" Jan 22 11:51:04 crc kubenswrapper[4752]: E0122 11:51:04.534956 4752 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa\": container with ID starting with bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa not found: ID does not exist" containerID="bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa" Jan 22 11:51:04 crc kubenswrapper[4752]: I0122 11:51:04.534992 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa"} err="failed to get container status \"bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa\": rpc error: code = NotFound desc = could not find container \"bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa\": container with ID starting with bede56ecce60ec48aa3143905468ada9859dcb44e1ab76402b585909ca36f8aa not found: ID does not exist" Jan 22 11:51:05 crc kubenswrapper[4752]: I0122 11:51:05.110338 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" path="/var/lib/kubelet/pods/0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922/volumes" Jan 22 11:52:27 crc kubenswrapper[4752]: I0122 11:52:27.723805 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:27 crc kubenswrapper[4752]: I0122 11:52:27.724653 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:57 crc kubenswrapper[4752]: I0122 11:52:57.724397 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:57 crc kubenswrapper[4752]: I0122 11:52:57.725293 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.724224 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.724821 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.724915 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.725970 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.726031 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" gracePeriod=600 Jan 22 11:53:27 crc kubenswrapper[4752]: E0122 11:53:27.858482 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.907008 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" exitCode=0 Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.907055 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0"} Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.907090 4752 scope.go:117] "RemoveContainer" containerID="7de9cf776e424959bcf40532860d0d02617c023f0ddfc34f688b7ddbe13dc305" Jan 22 11:53:27 crc kubenswrapper[4752]: I0122 11:53:27.907793 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:53:27 crc kubenswrapper[4752]: E0122 11:53:27.908099 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:53:41 crc kubenswrapper[4752]: I0122 11:53:41.107408 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:53:41 crc kubenswrapper[4752]: E0122 11:53:41.108426 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:53:55 crc 
kubenswrapper[4752]: I0122 11:53:55.098563 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:53:55 crc kubenswrapper[4752]: E0122 11:53:55.100016 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:54:09 crc kubenswrapper[4752]: I0122 11:54:09.098760 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:54:09 crc kubenswrapper[4752]: E0122 11:54:09.099520 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.054996 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b72m4"] Jan 22 11:54:13 crc kubenswrapper[4752]: E0122 11:54:13.056783 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="extract-utilities" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.056836 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="extract-utilities" Jan 22 11:54:13 crc kubenswrapper[4752]: E0122 11:54:13.056914 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="registry-server" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.056931 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="registry-server" Jan 22 11:54:13 crc kubenswrapper[4752]: E0122 11:54:13.056953 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="extract-content" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.056962 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="extract-content" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.058065 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8f7ebd-b6b4-41ea-9def-ae1b9ea74922" containerName="registry-server" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.062770 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.082997 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b72m4"] Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.112158 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-catalog-content\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.112223 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-utilities\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.112251 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/e618fc29-c604-4617-b3b4-56af59d7e90c-kube-api-access-9cq74\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.214382 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-catalog-content\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.214491 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-utilities\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.214531 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/e618fc29-c604-4617-b3b4-56af59d7e90c-kube-api-access-9cq74\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.216219 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-utilities\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.216217 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-catalog-content\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.248513 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/e618fc29-c604-4617-b3b4-56af59d7e90c-kube-api-access-9cq74\") pod \"redhat-operators-b72m4\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.392462 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:13 crc kubenswrapper[4752]: I0122 11:54:13.887122 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b72m4"] Jan 22 11:54:14 crc kubenswrapper[4752]: I0122 11:54:14.416445 4752 generic.go:334] "Generic (PLEG): container finished" podID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerID="eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[4752]: I0122 11:54:14.416570 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerDied","Data":"eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392"} Jan 22 11:54:14 crc kubenswrapper[4752]: I0122 11:54:14.416950 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerStarted","Data":"559cf3c79925a84c44f45af314106333d945599aa87065162b408eff5a8366f3"} Jan 22 11:54:16 crc kubenswrapper[4752]: I0122 11:54:16.439195 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerStarted","Data":"d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4"} Jan 22 11:54:18 crc kubenswrapper[4752]: I0122 11:54:18.461220 4752 generic.go:334] "Generic (PLEG): container finished" podID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerID="d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4" exitCode=0 Jan 22 11:54:18 crc kubenswrapper[4752]: I0122 11:54:18.461270 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerDied","Data":"d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4"} Jan 22 11:54:19 crc kubenswrapper[4752]: I0122 11:54:19.473290 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerStarted","Data":"a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8"} Jan 22 11:54:19 crc kubenswrapper[4752]: I0122 11:54:19.501936 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b72m4" podStartSLOduration=1.973893904 podStartE2EDuration="6.501900476s" podCreationTimestamp="2026-01-22 11:54:13 +0000 UTC" firstStartedPulling="2026-01-22 11:54:14.418313084 +0000 UTC m=+5333.648255992" lastFinishedPulling="2026-01-22 11:54:18.946319656 +0000 UTC m=+5338.176262564" observedRunningTime="2026-01-22 11:54:19.496973637 +0000 UTC m=+5338.726916585" watchObservedRunningTime="2026-01-22 11:54:19.501900476 +0000 UTC m=+5338.731843434" Jan 22 11:54:20 crc kubenswrapper[4752]: I0122 11:54:20.099297 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 
11:54:20 crc kubenswrapper[4752]: E0122 11:54:20.099603 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:54:23 crc kubenswrapper[4752]: I0122 11:54:23.392984 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:23 crc kubenswrapper[4752]: I0122 11:54:23.393575 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:24 crc kubenswrapper[4752]: I0122 11:54:24.599377 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b72m4" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="registry-server" probeResult="failure" output=< Jan 22 11:54:24 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 11:54:24 crc kubenswrapper[4752]: > Jan 22 11:54:33 crc kubenswrapper[4752]: I0122 11:54:33.443156 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:33 crc kubenswrapper[4752]: I0122 11:54:33.505106 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:33 crc kubenswrapper[4752]: I0122 11:54:33.681234 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b72m4"] Jan 22 11:54:34 crc kubenswrapper[4752]: I0122 11:54:34.626989 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b72m4" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="registry-server" containerID="cri-o://a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8" gracePeriod=2 Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.097892 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:54:35 crc kubenswrapper[4752]: E0122 11:54:35.098721 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.099045 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.218979 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-catalog-content\") pod \"e618fc29-c604-4617-b3b4-56af59d7e90c\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.219080 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/e618fc29-c604-4617-b3b4-56af59d7e90c-kube-api-access-9cq74\") pod \"e618fc29-c604-4617-b3b4-56af59d7e90c\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.219113 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-utilities\") pod \"e618fc29-c604-4617-b3b4-56af59d7e90c\" (UID: \"e618fc29-c604-4617-b3b4-56af59d7e90c\") " Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.220350 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-utilities" (OuterVolumeSpecName: "utilities") pod "e618fc29-c604-4617-b3b4-56af59d7e90c" (UID: "e618fc29-c604-4617-b3b4-56af59d7e90c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.227770 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e618fc29-c604-4617-b3b4-56af59d7e90c-kube-api-access-9cq74" (OuterVolumeSpecName: "kube-api-access-9cq74") pod "e618fc29-c604-4617-b3b4-56af59d7e90c" (UID: "e618fc29-c604-4617-b3b4-56af59d7e90c"). InnerVolumeSpecName "kube-api-access-9cq74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.322315 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cq74\" (UniqueName: \"kubernetes.io/projected/e618fc29-c604-4617-b3b4-56af59d7e90c-kube-api-access-9cq74\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.322348 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.382232 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e618fc29-c604-4617-b3b4-56af59d7e90c" (UID: "e618fc29-c604-4617-b3b4-56af59d7e90c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.423996 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618fc29-c604-4617-b3b4-56af59d7e90c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.638484 4752 generic.go:334] "Generic (PLEG): container finished" podID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerID="a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8" exitCode=0 Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.638546 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b72m4" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.638581 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerDied","Data":"a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8"} Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.639218 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72m4" event={"ID":"e618fc29-c604-4617-b3b4-56af59d7e90c","Type":"ContainerDied","Data":"559cf3c79925a84c44f45af314106333d945599aa87065162b408eff5a8366f3"} Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.639252 4752 scope.go:117] "RemoveContainer" containerID="a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.671612 4752 scope.go:117] "RemoveContainer" containerID="d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.677846 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b72m4"] Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.687107 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b72m4"] Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.708939 4752 scope.go:117] "RemoveContainer" containerID="eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.753126 4752 scope.go:117] "RemoveContainer" containerID="a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8" Jan 22 11:54:35 crc kubenswrapper[4752]: E0122 11:54:35.753649 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8\": container with ID starting with a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8 not found: ID does not exist" containerID="a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.753732 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8"} err="failed to get container status \"a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8\": rpc error: code = NotFound desc = could not find container \"a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8\": container with ID starting with a88a036576daf207ea96c4eb9534b5497430e760a8dbd11ac4dbc66b7a673dc8 not found: ID does not exist" Jan 22 11:54:35 crc 
kubenswrapper[4752]: I0122 11:54:35.753802 4752 scope.go:117] "RemoveContainer" containerID="d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4" Jan 22 11:54:35 crc kubenswrapper[4752]: E0122 11:54:35.754166 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4\": container with ID starting with d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4 not found: ID does not exist" containerID="d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.754237 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4"} err="failed to get container status \"d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4\": rpc error: code = NotFound desc = could not find container \"d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4\": container with ID starting with d3485be320e123672312a1464fc0e7d3c861a2720e137a8da81f20289ee3dfe4 not found: ID does not exist" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.754310 4752 scope.go:117] "RemoveContainer" containerID="eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392" Jan 22 11:54:35 crc kubenswrapper[4752]: E0122 11:54:35.754781 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392\": container with ID starting with eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392 not found: ID does not exist" containerID="eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392" Jan 22 11:54:35 crc kubenswrapper[4752]: I0122 11:54:35.754834 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392"} err="failed to get container status \"eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392\": rpc error: code = NotFound desc = could not find container \"eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392\": container with ID starting with eb4a229b57c8909e63e41f3b85c2e142bd12ae8a136707f9746e9308840af392 not found: ID does not exist" Jan 22 11:54:37 crc kubenswrapper[4752]: I0122 11:54:37.114254 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" path="/var/lib/kubelet/pods/e618fc29-c604-4617-b3b4-56af59d7e90c/volumes" Jan 22 11:54:46 crc kubenswrapper[4752]: I0122 11:54:46.099582 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:54:46 crc kubenswrapper[4752]: E0122 11:54:46.100411 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:54:58 crc kubenswrapper[4752]: I0122 11:54:58.099021 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" 
Jan 22 11:54:58 crc kubenswrapper[4752]: E0122 11:54:58.099876 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.580653 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7bbdg"]
Jan 22 11:54:59 crc kubenswrapper[4752]: E0122 11:54:59.581449 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="extract-content"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.581465 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="extract-content"
Jan 22 11:54:59 crc kubenswrapper[4752]: E0122 11:54:59.581493 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="extract-utilities"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.581501 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="extract-utilities"
Jan 22 11:54:59 crc kubenswrapper[4752]: E0122 11:54:59.581540 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="registry-server"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.581546 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="registry-server"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.581785 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e618fc29-c604-4617-b3b4-56af59d7e90c" containerName="registry-server"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.583762 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.595842 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bbdg"]
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.777930 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-utilities\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.778248 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-catalog-content\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.778335 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5vt\" (UniqueName: \"kubernetes.io/projected/56e6fc5e-c627-4cd3-a517-5f124cac7991-kube-api-access-xg5vt\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.881339 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-utilities\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.881476 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-catalog-content\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.881502 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg5vt\" (UniqueName: \"kubernetes.io/projected/56e6fc5e-c627-4cd3-a517-5f124cac7991-kube-api-access-xg5vt\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.881992 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-catalog-content\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.882015 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-utilities\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:54:59 crc kubenswrapper[4752]: I0122 11:54:59.908040 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg5vt\" (UniqueName: \"kubernetes.io/projected/56e6fc5e-c627-4cd3-a517-5f124cac7991-kube-api-access-xg5vt\") pod \"community-operators-7bbdg\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") " pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:00 crc kubenswrapper[4752]: I0122 11:55:00.205961 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:00 crc kubenswrapper[4752]: I0122 11:55:00.663278 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bbdg"]
Jan 22 11:55:00 crc kubenswrapper[4752]: I0122 11:55:00.932848 4752 generic.go:334] "Generic (PLEG): container finished" podID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerID="674416fd0fab68e6917ab16a02c450f58f5d82b718a677e65d38ba6c9f167315" exitCode=0
Jan 22 11:55:00 crc kubenswrapper[4752]: I0122 11:55:00.932936 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bbdg" event={"ID":"56e6fc5e-c627-4cd3-a517-5f124cac7991","Type":"ContainerDied","Data":"674416fd0fab68e6917ab16a02c450f58f5d82b718a677e65d38ba6c9f167315"}
Jan 22 11:55:00 crc kubenswrapper[4752]: I0122 11:55:00.932960 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bbdg" event={"ID":"56e6fc5e-c627-4cd3-a517-5f124cac7991","Type":"ContainerStarted","Data":"80148b32b5612af489336983088121491493a23017cbd3d193a4709d9d0fe35a"}
Jan 22 11:55:02 crc kubenswrapper[4752]: I0122 11:55:02.952400 4752 generic.go:334] "Generic (PLEG): container finished" podID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerID="bd0f9b5cb0ab562dcb1a295446b312a2aa8fbb9af41e74fd0677e3ccb0497120" exitCode=0
Jan 22 11:55:02 crc kubenswrapper[4752]: I0122 11:55:02.952517 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bbdg" event={"ID":"56e6fc5e-c627-4cd3-a517-5f124cac7991","Type":"ContainerDied","Data":"bd0f9b5cb0ab562dcb1a295446b312a2aa8fbb9af41e74fd0677e3ccb0497120"}
Jan 22 11:55:03 crc kubenswrapper[4752]: E0122 11:55:03.048811 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56e6fc5e_c627_4cd3_a517_5f124cac7991.slice/crio-conmon-bd0f9b5cb0ab562dcb1a295446b312a2aa8fbb9af41e74fd0677e3ccb0497120.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 11:55:03 crc kubenswrapper[4752]: I0122 11:55:03.964216 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bbdg" event={"ID":"56e6fc5e-c627-4cd3-a517-5f124cac7991","Type":"ContainerStarted","Data":"589b849cd7fc1d88d306580c579142ca62762bfebc3bd3c9c9b10ce6a827249d"}
Jan 22 11:55:03 crc kubenswrapper[4752]: I0122 11:55:03.984811 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7bbdg" podStartSLOduration=2.296605953 podStartE2EDuration="4.98478647s" podCreationTimestamp="2026-01-22 11:54:59 +0000 UTC" firstStartedPulling="2026-01-22 11:55:00.935272939 +0000 UTC m=+5380.165215847" lastFinishedPulling="2026-01-22 11:55:03.623453456 +0000 UTC m=+5382.853396364" observedRunningTime="2026-01-22 11:55:03.979705588 +0000 UTC m=+5383.209648556" watchObservedRunningTime="2026-01-22 11:55:03.98478647 +0000 UTC m=+5383.214729388"
Jan 22 11:55:10 crc kubenswrapper[4752]: I0122 11:55:10.097252 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0"
Jan 22 11:55:10 crc kubenswrapper[4752]: E0122 11:55:10.097966 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:55:10 crc kubenswrapper[4752]: I0122 11:55:10.207298 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:10 crc kubenswrapper[4752]: I0122 11:55:10.207357 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:10 crc kubenswrapper[4752]: I0122 11:55:10.312289 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:11 crc kubenswrapper[4752]: I0122 11:55:11.698457 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:11 crc kubenswrapper[4752]: I0122 11:55:11.750881 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bbdg"]
Jan 22 11:55:13 crc kubenswrapper[4752]: I0122 11:55:13.056495 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7bbdg" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="registry-server" containerID="cri-o://589b849cd7fc1d88d306580c579142ca62762bfebc3bd3c9c9b10ce6a827249d" gracePeriod=2
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.068956 4752 generic.go:334] "Generic (PLEG): container finished" podID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerID="589b849cd7fc1d88d306580c579142ca62762bfebc3bd3c9c9b10ce6a827249d" exitCode=0
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.069177 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bbdg" event={"ID":"56e6fc5e-c627-4cd3-a517-5f124cac7991","Type":"ContainerDied","Data":"589b849cd7fc1d88d306580c579142ca62762bfebc3bd3c9c9b10ce6a827249d"}
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.481029 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.549540 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-catalog-content\") pod \"56e6fc5e-c627-4cd3-a517-5f124cac7991\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") "
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.549608 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5vt\" (UniqueName: \"kubernetes.io/projected/56e6fc5e-c627-4cd3-a517-5f124cac7991-kube-api-access-xg5vt\") pod \"56e6fc5e-c627-4cd3-a517-5f124cac7991\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") "
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.549751 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-utilities\") pod \"56e6fc5e-c627-4cd3-a517-5f124cac7991\" (UID: \"56e6fc5e-c627-4cd3-a517-5f124cac7991\") "
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.551237 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-utilities" (OuterVolumeSpecName: "utilities") pod "56e6fc5e-c627-4cd3-a517-5f124cac7991" (UID: "56e6fc5e-c627-4cd3-a517-5f124cac7991"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.555120 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e6fc5e-c627-4cd3-a517-5f124cac7991-kube-api-access-xg5vt" (OuterVolumeSpecName: "kube-api-access-xg5vt") pod "56e6fc5e-c627-4cd3-a517-5f124cac7991" (UID: "56e6fc5e-c627-4cd3-a517-5f124cac7991"). InnerVolumeSpecName "kube-api-access-xg5vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.618580 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56e6fc5e-c627-4cd3-a517-5f124cac7991" (UID: "56e6fc5e-c627-4cd3-a517-5f124cac7991"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.651212 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.651245 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg5vt\" (UniqueName: \"kubernetes.io/projected/56e6fc5e-c627-4cd3-a517-5f124cac7991-kube-api-access-xg5vt\") on node \"crc\" DevicePath \"\""
Jan 22 11:55:14 crc kubenswrapper[4752]: I0122 11:55:14.651259 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e6fc5e-c627-4cd3-a517-5f124cac7991-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.085609 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bbdg" event={"ID":"56e6fc5e-c627-4cd3-a517-5f124cac7991","Type":"ContainerDied","Data":"80148b32b5612af489336983088121491493a23017cbd3d193a4709d9d0fe35a"}
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.085670 4752 scope.go:117] "RemoveContainer" containerID="589b849cd7fc1d88d306580c579142ca62762bfebc3bd3c9c9b10ce6a827249d"
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.085720 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bbdg"
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.107512 4752 scope.go:117] "RemoveContainer" containerID="bd0f9b5cb0ab562dcb1a295446b312a2aa8fbb9af41e74fd0677e3ccb0497120"
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.138705 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bbdg"]
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.154109 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7bbdg"]
Jan 22 11:55:15 crc kubenswrapper[4752]: I0122 11:55:15.205335 4752 scope.go:117] "RemoveContainer" containerID="674416fd0fab68e6917ab16a02c450f58f5d82b718a677e65d38ba6c9f167315"
Jan 22 11:55:17 crc kubenswrapper[4752]: I0122 11:55:17.116124 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" path="/var/lib/kubelet/pods/56e6fc5e-c627-4cd3-a517-5f124cac7991/volumes"
Jan 22 11:55:21 crc kubenswrapper[4752]: I0122 11:55:21.108093 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0"
Jan 22 11:55:21 crc kubenswrapper[4752]: E0122 11:55:21.108972 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 11:55:33 crc kubenswrapper[4752]: I0122 11:55:33.098401 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0"
Jan 22 11:55:33 crc kubenswrapper[4752]: E0122 11:55:33.099294 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:55:46 crc kubenswrapper[4752]: I0122 11:55:46.098780 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:55:46 crc kubenswrapper[4752]: E0122 11:55:46.099726 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:55:58 crc kubenswrapper[4752]: I0122 11:55:58.098006 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:55:58 crc kubenswrapper[4752]: E0122 11:55:58.099045 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:56:13 crc kubenswrapper[4752]: I0122 11:56:13.098164 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:56:13 crc kubenswrapper[4752]: E0122 11:56:13.099049 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:56:24 crc kubenswrapper[4752]: I0122 11:56:24.099525 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:56:24 crc kubenswrapper[4752]: E0122 11:56:24.112480 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:56:37 crc kubenswrapper[4752]: I0122 11:56:37.098381 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:56:37 crc kubenswrapper[4752]: E0122 11:56:37.099647 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:56:50 crc kubenswrapper[4752]: I0122 11:56:50.098387 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:56:50 crc kubenswrapper[4752]: E0122 11:56:50.099551 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:57:03 crc kubenswrapper[4752]: I0122 11:57:03.098279 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:57:03 crc kubenswrapper[4752]: E0122 11:57:03.098945 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:57:18 crc kubenswrapper[4752]: I0122 11:57:18.098523 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:57:18 crc kubenswrapper[4752]: E0122 11:57:18.099748 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:57:30 crc kubenswrapper[4752]: I0122 11:57:30.098296 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:57:30 crc kubenswrapper[4752]: E0122 11:57:30.099211 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:57:43 crc kubenswrapper[4752]: I0122 11:57:43.097948 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:57:43 crc kubenswrapper[4752]: E0122 11:57:43.099818 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" 
podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:57:55 crc kubenswrapper[4752]: I0122 11:57:55.098613 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:57:55 crc kubenswrapper[4752]: E0122 11:57:55.099992 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:58:09 crc kubenswrapper[4752]: I0122 11:58:09.097880 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:58:09 crc kubenswrapper[4752]: E0122 11:58:09.098653 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:58:24 crc kubenswrapper[4752]: I0122 11:58:24.099257 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:58:24 crc kubenswrapper[4752]: E0122 11:58:24.101054 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.620511 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74sct"] Jan 22 11:58:33 crc kubenswrapper[4752]: E0122 11:58:33.621458 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="registry-server" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.621472 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="registry-server" Jan 22 11:58:33 crc kubenswrapper[4752]: E0122 11:58:33.621486 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="extract-utilities" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.621493 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="extract-utilities" Jan 22 11:58:33 crc kubenswrapper[4752]: E0122 11:58:33.621508 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="extract-content" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.621514 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="extract-content" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.621716 4752 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="56e6fc5e-c627-4cd3-a517-5f124cac7991" containerName="registry-server" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.623210 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.640468 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74sct"] Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.732138 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-catalog-content\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.732310 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-utilities\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.732542 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4mq\" (UniqueName: \"kubernetes.io/projected/4242a0fb-b31f-4453-8675-defabab589b3-kube-api-access-kw4mq\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.834236 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw4mq\" (UniqueName: \"kubernetes.io/projected/4242a0fb-b31f-4453-8675-defabab589b3-kube-api-access-kw4mq\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.834364 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-catalog-content\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.834436 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-utilities\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.835044 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-utilities\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.835086 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-catalog-content\") pod \"redhat-marketplace-74sct\" (UID: 
\"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.856924 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw4mq\" (UniqueName: \"kubernetes.io/projected/4242a0fb-b31f-4453-8675-defabab589b3-kube-api-access-kw4mq\") pod \"redhat-marketplace-74sct\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") " pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:33 crc kubenswrapper[4752]: I0122 11:58:33.950038 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74sct" Jan 22 11:58:34 crc kubenswrapper[4752]: I0122 11:58:34.468501 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74sct"] Jan 22 11:58:35 crc kubenswrapper[4752]: I0122 11:58:35.146112 4752 generic.go:334] "Generic (PLEG): container finished" podID="4242a0fb-b31f-4453-8675-defabab589b3" containerID="89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772" exitCode=0 Jan 22 11:58:35 crc kubenswrapper[4752]: I0122 11:58:35.146174 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74sct" event={"ID":"4242a0fb-b31f-4453-8675-defabab589b3","Type":"ContainerDied","Data":"89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772"} Jan 22 11:58:35 crc kubenswrapper[4752]: I0122 11:58:35.146231 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74sct" event={"ID":"4242a0fb-b31f-4453-8675-defabab589b3","Type":"ContainerStarted","Data":"8c6badfda769e96a227c211ee3d4efe283f1c1db7a82a7afc3d18cbc23ab8845"} Jan 22 11:58:35 crc kubenswrapper[4752]: I0122 11:58:35.149473 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:58:37 crc kubenswrapper[4752]: I0122 11:58:37.168934 4752 generic.go:334] "Generic (PLEG): container finished" podID="4242a0fb-b31f-4453-8675-defabab589b3" containerID="15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d" exitCode=0 Jan 22 11:58:37 crc kubenswrapper[4752]: I0122 11:58:37.168969 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74sct" event={"ID":"4242a0fb-b31f-4453-8675-defabab589b3","Type":"ContainerDied","Data":"15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d"} Jan 22 11:58:39 crc kubenswrapper[4752]: I0122 11:58:39.098214 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 11:58:39 crc kubenswrapper[4752]: I0122 11:58:39.196818 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74sct" event={"ID":"4242a0fb-b31f-4453-8675-defabab589b3","Type":"ContainerStarted","Data":"cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22"} Jan 22 11:58:39 crc kubenswrapper[4752]: I0122 11:58:39.232547 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74sct" podStartSLOduration=3.819444371 podStartE2EDuration="6.232526256s" podCreationTimestamp="2026-01-22 11:58:33 +0000 UTC" firstStartedPulling="2026-01-22 11:58:35.149018767 +0000 UTC m=+5594.378961685" lastFinishedPulling="2026-01-22 11:58:37.562100622 +0000 UTC m=+5596.792043570" observedRunningTime="2026-01-22 11:58:39.216382365 +0000 UTC m=+5598.446325283" 
Jan 22 11:58:40 crc kubenswrapper[4752]: I0122 11:58:40.208941 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"74ee9c42cf450447dce2eb7920a57efde1b27901b18fbef9d2c5637d0ec04e91"}
Jan 22 11:58:43 crc kubenswrapper[4752]: I0122 11:58:43.950673 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74sct"
Jan 22 11:58:43 crc kubenswrapper[4752]: I0122 11:58:43.951181 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74sct"
Jan 22 11:58:44 crc kubenswrapper[4752]: I0122 11:58:44.018006 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74sct"
Jan 22 11:58:44 crc kubenswrapper[4752]: I0122 11:58:44.297585 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74sct"
Jan 22 11:58:44 crc kubenswrapper[4752]: I0122 11:58:44.350840 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74sct"]
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.269362 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74sct" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="registry-server" containerID="cri-o://cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22" gracePeriod=2
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.746021 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74sct"
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.840630 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-utilities\") pod \"4242a0fb-b31f-4453-8675-defabab589b3\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") "
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.840714 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-catalog-content\") pod \"4242a0fb-b31f-4453-8675-defabab589b3\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") "
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.840840 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw4mq\" (UniqueName: \"kubernetes.io/projected/4242a0fb-b31f-4453-8675-defabab589b3-kube-api-access-kw4mq\") pod \"4242a0fb-b31f-4453-8675-defabab589b3\" (UID: \"4242a0fb-b31f-4453-8675-defabab589b3\") "
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.841919 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-utilities" (OuterVolumeSpecName: "utilities") pod "4242a0fb-b31f-4453-8675-defabab589b3" (UID: "4242a0fb-b31f-4453-8675-defabab589b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.865098 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4242a0fb-b31f-4453-8675-defabab589b3-kube-api-access-kw4mq" (OuterVolumeSpecName: "kube-api-access-kw4mq") pod "4242a0fb-b31f-4453-8675-defabab589b3" (UID: "4242a0fb-b31f-4453-8675-defabab589b3"). InnerVolumeSpecName "kube-api-access-kw4mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.883250 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4242a0fb-b31f-4453-8675-defabab589b3" (UID: "4242a0fb-b31f-4453-8675-defabab589b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.942993 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.943032 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw4mq\" (UniqueName: \"kubernetes.io/projected/4242a0fb-b31f-4453-8675-defabab589b3-kube-api-access-kw4mq\") on node \"crc\" DevicePath \"\""
Jan 22 11:58:46 crc kubenswrapper[4752]: I0122 11:58:46.943046 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4242a0fb-b31f-4453-8675-defabab589b3-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.282013 4752 generic.go:334] "Generic (PLEG): container finished" podID="4242a0fb-b31f-4453-8675-defabab589b3" containerID="cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22" exitCode=0
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.282072 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74sct"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.282076 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74sct" event={"ID":"4242a0fb-b31f-4453-8675-defabab589b3","Type":"ContainerDied","Data":"cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22"}
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.282253 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74sct" event={"ID":"4242a0fb-b31f-4453-8675-defabab589b3","Type":"ContainerDied","Data":"8c6badfda769e96a227c211ee3d4efe283f1c1db7a82a7afc3d18cbc23ab8845"}
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.282285 4752 scope.go:117] "RemoveContainer" containerID="cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.324140 4752 scope.go:117] "RemoveContainer" containerID="15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.324403 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74sct"]
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.333553 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74sct"]
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.347654 4752 scope.go:117] "RemoveContainer" containerID="89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.394539 4752 scope.go:117] "RemoveContainer" containerID="cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22"
Jan 22 11:58:47 crc kubenswrapper[4752]: E0122 11:58:47.395023 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22\": container with ID starting with cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22 not found: ID does not exist" containerID="cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.395073 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22"} err="failed to get container status \"cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22\": rpc error: code = NotFound desc = could not find container \"cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22\": container with ID starting with cbb126d4e72eab201ccf58c697196205a179a695f6c01de775772f4933816f22 not found: ID does not exist"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.395100 4752 scope.go:117] "RemoveContainer" containerID="15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d"
Jan 22 11:58:47 crc kubenswrapper[4752]: E0122 11:58:47.395439 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d\": container with ID starting with 15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d not found: ID does not exist" containerID="15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.395489 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d"} err="failed to get container status \"15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d\": rpc error: code = NotFound desc = could not find container \"15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d\": container with ID starting with 15a3eb9abe5b0557a9085dcb447cc441b51054ce04f9db2549e135ccffb7d89d not found: ID does not exist"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.395522 4752 scope.go:117] "RemoveContainer" containerID="89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772"
Jan 22 11:58:47 crc kubenswrapper[4752]: E0122 11:58:47.397441 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772\": container with ID starting with 89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772 not found: ID does not exist" containerID="89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772"
Jan 22 11:58:47 crc kubenswrapper[4752]: I0122 11:58:47.397477 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772"} err="failed to get container status \"89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772\": rpc error: code = NotFound desc = could not find container \"89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772\": container with ID starting with 89211664c2b268fefc6e3e56b3e05d4f9f7bfe73b40abb5a1e5faeece838d772 not found: ID does not exist"
Jan 22 11:58:49 crc kubenswrapper[4752]: I0122 11:58:49.109767 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4242a0fb-b31f-4453-8675-defabab589b3" path="/var/lib/kubelet/pods/4242a0fb-b31f-4453-8675-defabab589b3/volumes"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.151186 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"]
Jan 22 12:00:00 crc kubenswrapper[4752]: E0122 12:00:00.152180 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="registry-server"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.152195 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="registry-server"
Jan 22 12:00:00 crc kubenswrapper[4752]: E0122 12:00:00.152206 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="extract-utilities"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.152212 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="extract-utilities"
Jan 22 12:00:00 crc kubenswrapper[4752]: E0122 12:00:00.152251 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="extract-content"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.152257 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="extract-content"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.152463 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="4242a0fb-b31f-4453-8675-defabab589b3" containerName="registry-server"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.153327 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.156732 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.157097 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.164686 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"]
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.303394 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95908416-e2d3-4199-a63e-9c20ece42c55-secret-volume\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.303448 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkk7\" (UniqueName: \"kubernetes.io/projected/95908416-e2d3-4199-a63e-9c20ece42c55-kube-api-access-9rkk7\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.303547 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95908416-e2d3-4199-a63e-9c20ece42c55-config-volume\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.405166 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95908416-e2d3-4199-a63e-9c20ece42c55-secret-volume\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.405421 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rkk7\" (UniqueName: \"kubernetes.io/projected/95908416-e2d3-4199-a63e-9c20ece42c55-kube-api-access-9rkk7\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.405540 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95908416-e2d3-4199-a63e-9c20ece42c55-config-volume\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"
Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.406359
4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95908416-e2d3-4199-a63e-9c20ece42c55-config-volume\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.410658 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95908416-e2d3-4199-a63e-9c20ece42c55-secret-volume\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.431808 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rkk7\" (UniqueName: \"kubernetes.io/projected/95908416-e2d3-4199-a63e-9c20ece42c55-kube-api-access-9rkk7\") pod \"collect-profiles-29484720-nqpsh\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.477123 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.938937 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh"] Jan 22 12:00:00 crc kubenswrapper[4752]: I0122 12:00:00.992212 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" event={"ID":"95908416-e2d3-4199-a63e-9c20ece42c55","Type":"ContainerStarted","Data":"f8b9666535f7c9cbdf09f4026703b3ab8eb1e2e11e5a0cba7ae31f66ac8643f7"} Jan 22 12:00:02 crc kubenswrapper[4752]: I0122 12:00:02.007561 4752 generic.go:334] "Generic (PLEG): container finished" podID="95908416-e2d3-4199-a63e-9c20ece42c55" containerID="8a54b53ce6a76afa2ba773e5c6c10a3a6c01d568cb932d6537d277a37142c1ae" exitCode=0 Jan 22 12:00:02 crc kubenswrapper[4752]: I0122 12:00:02.007626 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" event={"ID":"95908416-e2d3-4199-a63e-9c20ece42c55","Type":"ContainerDied","Data":"8a54b53ce6a76afa2ba773e5c6c10a3a6c01d568cb932d6537d277a37142c1ae"} Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.497707 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.681181 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95908416-e2d3-4199-a63e-9c20ece42c55-config-volume\") pod \"95908416-e2d3-4199-a63e-9c20ece42c55\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.681407 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95908416-e2d3-4199-a63e-9c20ece42c55-secret-volume\") pod \"95908416-e2d3-4199-a63e-9c20ece42c55\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.681511 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rkk7\" (UniqueName: \"kubernetes.io/projected/95908416-e2d3-4199-a63e-9c20ece42c55-kube-api-access-9rkk7\") pod \"95908416-e2d3-4199-a63e-9c20ece42c55\" (UID: \"95908416-e2d3-4199-a63e-9c20ece42c55\") " Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.682319 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95908416-e2d3-4199-a63e-9c20ece42c55-config-volume" (OuterVolumeSpecName: "config-volume") pod "95908416-e2d3-4199-a63e-9c20ece42c55" (UID: "95908416-e2d3-4199-a63e-9c20ece42c55"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.688938 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95908416-e2d3-4199-a63e-9c20ece42c55-kube-api-access-9rkk7" (OuterVolumeSpecName: "kube-api-access-9rkk7") pod "95908416-e2d3-4199-a63e-9c20ece42c55" (UID: "95908416-e2d3-4199-a63e-9c20ece42c55"). InnerVolumeSpecName "kube-api-access-9rkk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.689354 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95908416-e2d3-4199-a63e-9c20ece42c55-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95908416-e2d3-4199-a63e-9c20ece42c55" (UID: "95908416-e2d3-4199-a63e-9c20ece42c55"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.784092 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95908416-e2d3-4199-a63e-9c20ece42c55-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.784143 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95908416-e2d3-4199-a63e-9c20ece42c55-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[4752]: I0122 12:00:03.784159 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rkk7\" (UniqueName: \"kubernetes.io/projected/95908416-e2d3-4199-a63e-9c20ece42c55-kube-api-access-9rkk7\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:04 crc kubenswrapper[4752]: I0122 12:00:04.026344 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" event={"ID":"95908416-e2d3-4199-a63e-9c20ece42c55","Type":"ContainerDied","Data":"f8b9666535f7c9cbdf09f4026703b3ab8eb1e2e11e5a0cba7ae31f66ac8643f7"} Jan 22 12:00:04 crc kubenswrapper[4752]: I0122 12:00:04.026386 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b9666535f7c9cbdf09f4026703b3ab8eb1e2e11e5a0cba7ae31f66ac8643f7" Jan 22 12:00:04 crc kubenswrapper[4752]: I0122 12:00:04.026411 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-nqpsh" Jan 22 12:00:04 crc kubenswrapper[4752]: I0122 12:00:04.586425 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"] Jan 22 12:00:04 crc kubenswrapper[4752]: I0122 12:00:04.595323 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484675-mmbvs"] Jan 22 12:00:05 crc kubenswrapper[4752]: I0122 12:00:05.118351 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d32bf1f-86e9-4c48-8dbd-54ee831d235c" path="/var/lib/kubelet/pods/1d32bf1f-86e9-4c48-8dbd-54ee831d235c/volumes" Jan 22 12:00:48 crc kubenswrapper[4752]: I0122 12:00:48.335138 4752 scope.go:117] "RemoveContainer" containerID="96f2a1aa59972e4fb850775573b797df2c4d7369a0e1bed1b5506d5d3adbfdc2" Jan 22 12:00:57 crc kubenswrapper[4752]: I0122 12:00:57.724288 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:00:57 crc kubenswrapper[4752]: I0122 12:00:57.725159 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.158671 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29484721-p4lnb"] Jan 22 12:01:00 crc kubenswrapper[4752]: E0122 12:01:00.159457 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95908416-e2d3-4199-a63e-9c20ece42c55" 
containerName="collect-profiles" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.159674 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="95908416-e2d3-4199-a63e-9c20ece42c55" containerName="collect-profiles" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.159963 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="95908416-e2d3-4199-a63e-9c20ece42c55" containerName="collect-profiles" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.160778 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.175167 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484721-p4lnb"] Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.268496 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vs54\" (UniqueName: \"kubernetes.io/projected/0c01b187-2554-4bce-b3d4-154403015e11-kube-api-access-9vs54\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.268912 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-config-data\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.268990 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-combined-ca-bundle\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.269034 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-fernet-keys\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.371443 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-config-data\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.371498 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-combined-ca-bundle\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.371522 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-fernet-keys\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " 
pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.371569 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vs54\" (UniqueName: \"kubernetes.io/projected/0c01b187-2554-4bce-b3d4-154403015e11-kube-api-access-9vs54\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.379797 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-config-data\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.383598 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-combined-ca-bundle\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.392184 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-fernet-keys\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.395491 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vs54\" (UniqueName: \"kubernetes.io/projected/0c01b187-2554-4bce-b3d4-154403015e11-kube-api-access-9vs54\") pod \"keystone-cron-29484721-p4lnb\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.490917 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:00 crc kubenswrapper[4752]: I0122 12:01:00.976327 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29484721-p4lnb"] Jan 22 12:01:01 crc kubenswrapper[4752]: I0122 12:01:01.594703 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484721-p4lnb" event={"ID":"0c01b187-2554-4bce-b3d4-154403015e11","Type":"ContainerStarted","Data":"53b63ab831b1f9b76d9873bba88b263ed9252ada1c0445f040a3ea3e63709854"} Jan 22 12:01:01 crc kubenswrapper[4752]: I0122 12:01:01.595091 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484721-p4lnb" event={"ID":"0c01b187-2554-4bce-b3d4-154403015e11","Type":"ContainerStarted","Data":"058dc51de9b3c45aa8fabd2b58bd2ad159e418dc667a6bea44273257c9e220bd"} Jan 22 12:01:01 crc kubenswrapper[4752]: I0122 12:01:01.617812 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29484721-p4lnb" podStartSLOduration=1.6177926409999999 podStartE2EDuration="1.617792641s" podCreationTimestamp="2026-01-22 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:01:01.609951456 +0000 UTC m=+5740.839894364" watchObservedRunningTime="2026-01-22 12:01:01.617792641 +0000 UTC m=+5740.847735539" Jan 22 12:01:05 crc kubenswrapper[4752]: I0122 12:01:05.643886 4752 generic.go:334] "Generic (PLEG): container finished" podID="0c01b187-2554-4bce-b3d4-154403015e11" containerID="53b63ab831b1f9b76d9873bba88b263ed9252ada1c0445f040a3ea3e63709854" exitCode=0 Jan 22 12:01:05 crc kubenswrapper[4752]: I0122 12:01:05.643999 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484721-p4lnb" event={"ID":"0c01b187-2554-4bce-b3d4-154403015e11","Type":"ContainerDied","Data":"53b63ab831b1f9b76d9873bba88b263ed9252ada1c0445f040a3ea3e63709854"} Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.050303 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.216088 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vs54\" (UniqueName: \"kubernetes.io/projected/0c01b187-2554-4bce-b3d4-154403015e11-kube-api-access-9vs54\") pod \"0c01b187-2554-4bce-b3d4-154403015e11\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.216232 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-fernet-keys\") pod \"0c01b187-2554-4bce-b3d4-154403015e11\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.216306 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-config-data\") pod \"0c01b187-2554-4bce-b3d4-154403015e11\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.216526 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-combined-ca-bundle\") pod \"0c01b187-2554-4bce-b3d4-154403015e11\" (UID: \"0c01b187-2554-4bce-b3d4-154403015e11\") " Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.224069 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0c01b187-2554-4bce-b3d4-154403015e11" (UID: "0c01b187-2554-4bce-b3d4-154403015e11"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.241141 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c01b187-2554-4bce-b3d4-154403015e11-kube-api-access-9vs54" (OuterVolumeSpecName: "kube-api-access-9vs54") pod "0c01b187-2554-4bce-b3d4-154403015e11" (UID: "0c01b187-2554-4bce-b3d4-154403015e11"). InnerVolumeSpecName "kube-api-access-9vs54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.276752 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c01b187-2554-4bce-b3d4-154403015e11" (UID: "0c01b187-2554-4bce-b3d4-154403015e11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.289964 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-config-data" (OuterVolumeSpecName: "config-data") pod "0c01b187-2554-4bce-b3d4-154403015e11" (UID: "0c01b187-2554-4bce-b3d4-154403015e11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.319346 4752 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.319378 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vs54\" (UniqueName: \"kubernetes.io/projected/0c01b187-2554-4bce-b3d4-154403015e11-kube-api-access-9vs54\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.319391 4752 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.319402 4752 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c01b187-2554-4bce-b3d4-154403015e11-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.663851 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29484721-p4lnb" event={"ID":"0c01b187-2554-4bce-b3d4-154403015e11","Type":"ContainerDied","Data":"058dc51de9b3c45aa8fabd2b58bd2ad159e418dc667a6bea44273257c9e220bd"} Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.663912 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="058dc51de9b3c45aa8fabd2b58bd2ad159e418dc667a6bea44273257c9e220bd" Jan 22 12:01:07 crc kubenswrapper[4752]: I0122 12:01:07.664036 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29484721-p4lnb" Jan 22 12:01:27 crc kubenswrapper[4752]: I0122 12:01:27.723492 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:01:27 crc kubenswrapper[4752]: I0122 12:01:27.724062 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.296746 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfbtd"] Jan 22 12:01:51 crc kubenswrapper[4752]: E0122 12:01:51.298774 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c01b187-2554-4bce-b3d4-154403015e11" containerName="keystone-cron" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.298792 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c01b187-2554-4bce-b3d4-154403015e11" containerName="keystone-cron" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.299021 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c01b187-2554-4bce-b3d4-154403015e11" containerName="keystone-cron" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.300632 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.308553 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfbtd"] Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.347527 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ce264a-9019-445a-8d8a-0e4804e267ec-catalog-content\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.347592 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ce264a-9019-445a-8d8a-0e4804e267ec-utilities\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.348065 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq55x\" (UniqueName: \"kubernetes.io/projected/96ce264a-9019-445a-8d8a-0e4804e267ec-kube-api-access-dq55x\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.449751 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ce264a-9019-445a-8d8a-0e4804e267ec-catalog-content\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.449816 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ce264a-9019-445a-8d8a-0e4804e267ec-utilities\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.449978 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq55x\" (UniqueName: \"kubernetes.io/projected/96ce264a-9019-445a-8d8a-0e4804e267ec-kube-api-access-dq55x\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.450367 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96ce264a-9019-445a-8d8a-0e4804e267ec-catalog-content\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.450427 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96ce264a-9019-445a-8d8a-0e4804e267ec-utilities\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.472716 4752 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dq55x\" (UniqueName: \"kubernetes.io/projected/96ce264a-9019-445a-8d8a-0e4804e267ec-kube-api-access-dq55x\") pod \"certified-operators-jfbtd\" (UID: \"96ce264a-9019-445a-8d8a-0e4804e267ec\") " pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:51 crc kubenswrapper[4752]: I0122 12:01:51.627235 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:01:52 crc kubenswrapper[4752]: I0122 12:01:52.239307 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfbtd"] Jan 22 12:01:52 crc kubenswrapper[4752]: W0122 12:01:52.253379 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96ce264a_9019_445a_8d8a_0e4804e267ec.slice/crio-dc38003afd4ca74a1d5eb3f51ec8c8b72fcb18936d2b216e03b3f0309c975eda WatchSource:0}: Error finding container dc38003afd4ca74a1d5eb3f51ec8c8b72fcb18936d2b216e03b3f0309c975eda: Status 404 returned error can't find the container with id dc38003afd4ca74a1d5eb3f51ec8c8b72fcb18936d2b216e03b3f0309c975eda Jan 22 12:01:53 crc kubenswrapper[4752]: I0122 12:01:53.174923 4752 generic.go:334] "Generic (PLEG): container finished" podID="96ce264a-9019-445a-8d8a-0e4804e267ec" containerID="58d3502bd145b9e1d8efb3d4eb040c92c7345798fdb034b951709673b8419120" exitCode=0 Jan 22 12:01:53 crc kubenswrapper[4752]: I0122 12:01:53.175038 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfbtd" event={"ID":"96ce264a-9019-445a-8d8a-0e4804e267ec","Type":"ContainerDied","Data":"58d3502bd145b9e1d8efb3d4eb040c92c7345798fdb034b951709673b8419120"} Jan 22 12:01:53 crc kubenswrapper[4752]: I0122 12:01:53.175229 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfbtd" event={"ID":"96ce264a-9019-445a-8d8a-0e4804e267ec","Type":"ContainerStarted","Data":"dc38003afd4ca74a1d5eb3f51ec8c8b72fcb18936d2b216e03b3f0309c975eda"} Jan 22 12:01:57 crc kubenswrapper[4752]: I0122 12:01:57.724045 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:01:57 crc kubenswrapper[4752]: I0122 12:01:57.725526 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:01:57 crc kubenswrapper[4752]: I0122 12:01:57.725633 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" Jan 22 12:01:57 crc kubenswrapper[4752]: I0122 12:01:57.726447 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74ee9c42cf450447dce2eb7920a57efde1b27901b18fbef9d2c5637d0ec04e91"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:01:57 crc kubenswrapper[4752]: I0122 
12:01:57.726593 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://74ee9c42cf450447dce2eb7920a57efde1b27901b18fbef9d2c5637d0ec04e91" gracePeriod=600 Jan 22 12:02:02 crc kubenswrapper[4752]: I0122 12:02:02.273156 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="74ee9c42cf450447dce2eb7920a57efde1b27901b18fbef9d2c5637d0ec04e91" exitCode=0 Jan 22 12:02:02 crc kubenswrapper[4752]: I0122 12:02:02.273222 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"74ee9c42cf450447dce2eb7920a57efde1b27901b18fbef9d2c5637d0ec04e91"} Jan 22 12:02:02 crc kubenswrapper[4752]: I0122 12:02:02.274013 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039"} Jan 22 12:02:02 crc kubenswrapper[4752]: I0122 12:02:02.274043 4752 scope.go:117] "RemoveContainer" containerID="d023cbf7eb4fc640c62a807be77c92bfa691962080a38d6f4b384116b06160a0" Jan 22 12:02:03 crc kubenswrapper[4752]: I0122 12:02:03.284217 4752 generic.go:334] "Generic (PLEG): container finished" podID="96ce264a-9019-445a-8d8a-0e4804e267ec" containerID="fd7a5790f33a74efe2ca626a533bdee0a75e34c0cc7ce50cee8335ffb264f124" exitCode=0 Jan 22 12:02:03 crc kubenswrapper[4752]: I0122 12:02:03.284264 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfbtd" event={"ID":"96ce264a-9019-445a-8d8a-0e4804e267ec","Type":"ContainerDied","Data":"fd7a5790f33a74efe2ca626a533bdee0a75e34c0cc7ce50cee8335ffb264f124"} Jan 22 12:02:04 crc kubenswrapper[4752]: I0122 12:02:04.301316 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfbtd" event={"ID":"96ce264a-9019-445a-8d8a-0e4804e267ec","Type":"ContainerStarted","Data":"3d0daa24c05b5d74068eea22c00ba50b453f633ecff6d288f1166eb8e96744a9"} Jan 22 12:02:04 crc kubenswrapper[4752]: I0122 12:02:04.328082 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfbtd" podStartSLOduration=2.70321859 podStartE2EDuration="13.328062301s" podCreationTimestamp="2026-01-22 12:01:51 +0000 UTC" firstStartedPulling="2026-01-22 12:01:53.176617461 +0000 UTC m=+5792.406560369" lastFinishedPulling="2026-01-22 12:02:03.801461182 +0000 UTC m=+5803.031404080" observedRunningTime="2026-01-22 12:02:04.319116877 +0000 UTC m=+5803.549059785" watchObservedRunningTime="2026-01-22 12:02:04.328062301 +0000 UTC m=+5803.558005209" Jan 22 12:02:11 crc kubenswrapper[4752]: I0122 12:02:11.628470 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:02:11 crc kubenswrapper[4752]: I0122 12:02:11.629081 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:02:11 crc kubenswrapper[4752]: I0122 12:02:11.682035 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:02:12 crc kubenswrapper[4752]: I0122 12:02:12.436087 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfbtd" Jan 22 12:02:12 crc kubenswrapper[4752]: I0122 12:02:12.499886 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfbtd"] Jan 22 12:02:12 crc kubenswrapper[4752]: I0122 12:02:12.546429 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdmk8"] Jan 22 12:02:12 crc kubenswrapper[4752]: I0122 12:02:12.546704 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdmk8" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="registry-server" containerID="cri-o://ef86f04979d35a84a83ae4139884bd428236d29914c17582231de1beda39dd8c" gracePeriod=2 Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.405100 4752 generic.go:334] "Generic (PLEG): container finished" podID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerID="ef86f04979d35a84a83ae4139884bd428236d29914c17582231de1beda39dd8c" exitCode=0 Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.405208 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerDied","Data":"ef86f04979d35a84a83ae4139884bd428236d29914c17582231de1beda39dd8c"} Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.700886 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.833285 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-utilities\") pod \"422c8d2a-f9fe-4807-88c3-874a4b062612\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.833727 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2lhq\" (UniqueName: \"kubernetes.io/projected/422c8d2a-f9fe-4807-88c3-874a4b062612-kube-api-access-m2lhq\") pod \"422c8d2a-f9fe-4807-88c3-874a4b062612\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.833828 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-catalog-content\") pod \"422c8d2a-f9fe-4807-88c3-874a4b062612\" (UID: \"422c8d2a-f9fe-4807-88c3-874a4b062612\") " Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.836995 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-utilities" (OuterVolumeSpecName: "utilities") pod "422c8d2a-f9fe-4807-88c3-874a4b062612" (UID: "422c8d2a-f9fe-4807-88c3-874a4b062612"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.843457 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422c8d2a-f9fe-4807-88c3-874a4b062612-kube-api-access-m2lhq" (OuterVolumeSpecName: "kube-api-access-m2lhq") pod "422c8d2a-f9fe-4807-88c3-874a4b062612" (UID: "422c8d2a-f9fe-4807-88c3-874a4b062612"). InnerVolumeSpecName "kube-api-access-m2lhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.920142 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "422c8d2a-f9fe-4807-88c3-874a4b062612" (UID: "422c8d2a-f9fe-4807-88c3-874a4b062612"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.938368 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2lhq\" (UniqueName: \"kubernetes.io/projected/422c8d2a-f9fe-4807-88c3-874a4b062612-kube-api-access-m2lhq\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.938412 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:13 crc kubenswrapper[4752]: I0122 12:02:13.938422 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422c8d2a-f9fe-4807-88c3-874a4b062612-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.418928 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdmk8" Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.418943 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdmk8" event={"ID":"422c8d2a-f9fe-4807-88c3-874a4b062612","Type":"ContainerDied","Data":"c230ffa55525975f2fe672d6c468a92fdda7e8fd382dfa4d9b81d91bc858b594"} Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.419320 4752 scope.go:117] "RemoveContainer" containerID="ef86f04979d35a84a83ae4139884bd428236d29914c17582231de1beda39dd8c" Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.473449 4752 scope.go:117] "RemoveContainer" containerID="aabf104cce6c5c3fd2ac4c73c8abbd7c9c028c1ff79876f93a25fba80e2dd131" Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.475702 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdmk8"] Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.484808 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdmk8"] Jan 22 12:02:14 crc kubenswrapper[4752]: I0122 12:02:14.501123 4752 scope.go:117] "RemoveContainer" containerID="03ab387d66cef87224731aff0dfc2846d8fa9c2e23c04ea2cde42457d65ae945" Jan 22 12:02:15 crc kubenswrapper[4752]: I0122 12:02:15.110173 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" path="/var/lib/kubelet/pods/422c8d2a-f9fe-4807-88c3-874a4b062612/volumes" Jan 22 12:04:27 crc kubenswrapper[4752]: I0122 12:04:27.723846 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:04:27 crc kubenswrapper[4752]: I0122 12:04:27.724397 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.787446 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqj6q"] Jan 22 12:04:42 crc kubenswrapper[4752]: E0122 12:04:42.789033 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="extract-content" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.789055 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="extract-content" Jan 22 12:04:42 crc kubenswrapper[4752]: E0122 12:04:42.789073 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="extract-utilities" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.789081 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="extract-utilities" Jan 22 12:04:42 crc kubenswrapper[4752]: E0122 12:04:42.789122 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="registry-server" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.789129 4752 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="registry-server" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.789342 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="422c8d2a-f9fe-4807-88c3-874a4b062612" containerName="registry-server" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.791351 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.807408 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqj6q"] Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.885494 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-catalog-content\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.885608 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzd8j\" (UniqueName: \"kubernetes.io/projected/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-kube-api-access-gzd8j\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.885680 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-utilities\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.987740 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-utilities\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.987909 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-catalog-content\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.987966 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzd8j\" (UniqueName: \"kubernetes.io/projected/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-kube-api-access-gzd8j\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.988376 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-utilities\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:42 crc kubenswrapper[4752]: I0122 12:04:42.988624 4752 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-catalog-content\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:43 crc kubenswrapper[4752]: I0122 12:04:43.011182 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzd8j\" (UniqueName: \"kubernetes.io/projected/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-kube-api-access-gzd8j\") pod \"redhat-operators-zqj6q\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:43 crc kubenswrapper[4752]: I0122 12:04:43.118674 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:43 crc kubenswrapper[4752]: I0122 12:04:43.678483 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqj6q"] Jan 22 12:04:44 crc kubenswrapper[4752]: I0122 12:04:44.176080 4752 generic.go:334] "Generic (PLEG): container finished" podID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerID="6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8" exitCode=0 Jan 22 12:04:44 crc kubenswrapper[4752]: I0122 12:04:44.176328 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerDied","Data":"6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8"} Jan 22 12:04:44 crc kubenswrapper[4752]: I0122 12:04:44.176439 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerStarted","Data":"de129fc894db513b7ac055945cd0a36bfa4b4adb23e8c7181bc4bc57a958360a"} Jan 22 12:04:44 crc kubenswrapper[4752]: I0122 12:04:44.179066 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:04:46 crc kubenswrapper[4752]: I0122 12:04:46.199947 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerStarted","Data":"27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692"} Jan 22 12:04:48 crc kubenswrapper[4752]: E0122 12:04:48.092811 4752 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12f37879_e0e8_46b8_bc00_7c5ecc4d1b31.slice/crio-27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692.scope\": RecentStats: unable to find data in memory cache]" Jan 22 12:04:48 crc kubenswrapper[4752]: I0122 12:04:48.234806 4752 generic.go:334] "Generic (PLEG): container finished" podID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerID="27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692" exitCode=0 Jan 22 12:04:48 crc kubenswrapper[4752]: I0122 12:04:48.234923 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerDied","Data":"27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692"} Jan 22 12:04:51 crc kubenswrapper[4752]: I0122 12:04:51.268664 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" 
event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerStarted","Data":"ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802"} Jan 22 12:04:53 crc kubenswrapper[4752]: I0122 12:04:53.121204 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:53 crc kubenswrapper[4752]: I0122 12:04:53.121582 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:04:54 crc kubenswrapper[4752]: I0122 12:04:54.176953 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqj6q" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="registry-server" probeResult="failure" output=< Jan 22 12:04:54 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 12:04:54 crc kubenswrapper[4752]: > Jan 22 12:04:57 crc kubenswrapper[4752]: I0122 12:04:57.723484 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:04:57 crc kubenswrapper[4752]: I0122 12:04:57.724240 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:05:03 crc kubenswrapper[4752]: I0122 12:05:03.171763 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:05:03 crc kubenswrapper[4752]: I0122 12:05:03.193374 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqj6q" podStartSLOduration=15.196653279 podStartE2EDuration="21.19335661s" podCreationTimestamp="2026-01-22 12:04:42 +0000 UTC" firstStartedPulling="2026-01-22 12:04:44.178730542 +0000 UTC m=+5963.408673450" lastFinishedPulling="2026-01-22 12:04:50.175433833 +0000 UTC m=+5969.405376781" observedRunningTime="2026-01-22 12:04:51.295540171 +0000 UTC m=+5970.525483089" watchObservedRunningTime="2026-01-22 12:05:03.19335661 +0000 UTC m=+5982.423299518" Jan 22 12:05:03 crc kubenswrapper[4752]: I0122 12:05:03.231570 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:05:03 crc kubenswrapper[4752]: I0122 12:05:03.411344 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqj6q"] Jan 22 12:05:04 crc kubenswrapper[4752]: I0122 12:05:04.390145 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqj6q" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="registry-server" containerID="cri-o://ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802" gracePeriod=2 Jan 22 12:05:04 crc kubenswrapper[4752]: I0122 12:05:04.887574 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.019082 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-catalog-content\") pod \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.019291 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzd8j\" (UniqueName: \"kubernetes.io/projected/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-kube-api-access-gzd8j\") pod \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.019499 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-utilities\") pod \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\" (UID: \"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31\") " Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.020628 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-utilities" (OuterVolumeSpecName: "utilities") pod "12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" (UID: "12f37879-e0e8-46b8-bc00-7c5ecc4d1b31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.033227 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-kube-api-access-gzd8j" (OuterVolumeSpecName: "kube-api-access-gzd8j") pod "12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" (UID: "12f37879-e0e8-46b8-bc00-7c5ecc4d1b31"). InnerVolumeSpecName "kube-api-access-gzd8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.122631 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzd8j\" (UniqueName: \"kubernetes.io/projected/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-kube-api-access-gzd8j\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.122670 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.157193 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" (UID: "12f37879-e0e8-46b8-bc00-7c5ecc4d1b31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.225115 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.402252 4752 generic.go:334] "Generic (PLEG): container finished" podID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerID="ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802" exitCode=0 Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.402319 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqj6q" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.402363 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerDied","Data":"ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802"} Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.403562 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqj6q" event={"ID":"12f37879-e0e8-46b8-bc00-7c5ecc4d1b31","Type":"ContainerDied","Data":"de129fc894db513b7ac055945cd0a36bfa4b4adb23e8c7181bc4bc57a958360a"} Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.403590 4752 scope.go:117] "RemoveContainer" containerID="ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.441126 4752 scope.go:117] "RemoveContainer" containerID="27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.441852 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqj6q"] Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.458927 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqj6q"] Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.463082 4752 scope.go:117] "RemoveContainer" containerID="6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.505385 4752 scope.go:117] "RemoveContainer" containerID="ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802" Jan 22 12:05:05 crc kubenswrapper[4752]: E0122 12:05:05.505871 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802\": container with ID starting with ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802 not found: ID does not exist" containerID="ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.505980 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802"} err="failed to get container status \"ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802\": rpc error: code = NotFound desc = could not find container \"ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802\": container with ID starting with ae2f9ce1e9ad936a9b7272c7df9513215bf761956fb3017720425b148395d802 not found: ID does not exist" Jan 22 12:05:05 crc 
kubenswrapper[4752]: I0122 12:05:05.506061 4752 scope.go:117] "RemoveContainer" containerID="27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692" Jan 22 12:05:05 crc kubenswrapper[4752]: E0122 12:05:05.506442 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692\": container with ID starting with 27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692 not found: ID does not exist" containerID="27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.506461 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692"} err="failed to get container status \"27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692\": rpc error: code = NotFound desc = could not find container \"27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692\": container with ID starting with 27c8559b8ebd7e258a991e6231c008da997afd7d496e881a783b05209e35b692 not found: ID does not exist" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.506475 4752 scope.go:117] "RemoveContainer" containerID="6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8" Jan 22 12:05:05 crc kubenswrapper[4752]: E0122 12:05:05.506884 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8\": container with ID starting with 6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8 not found: ID does not exist" containerID="6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8" Jan 22 12:05:05 crc kubenswrapper[4752]: I0122 12:05:05.506903 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8"} err="failed to get container status \"6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8\": rpc error: code = NotFound desc = could not find container \"6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8\": container with ID starting with 6d17799914a58c8c6dfb1c7739d0174f62c5e205679cb15dec32222e47bd1ae8 not found: ID does not exist" Jan 22 12:05:07 crc kubenswrapper[4752]: I0122 12:05:07.110779 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" path="/var/lib/kubelet/pods/12f37879-e0e8-46b8-bc00-7c5ecc4d1b31/volumes" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.948953 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jkkdm"] Jan 22 12:05:26 crc kubenswrapper[4752]: E0122 12:05:26.949989 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="extract-content" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.950007 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="extract-content" Jan 22 12:05:26 crc kubenswrapper[4752]: E0122 12:05:26.950026 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="extract-utilities" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.950035 4752 
state_mem.go:107] "Deleted CPUSet assignment" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="extract-utilities" Jan 22 12:05:26 crc kubenswrapper[4752]: E0122 12:05:26.950061 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="registry-server" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.950070 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="registry-server" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.950325 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f37879-e0e8-46b8-bc00-7c5ecc4d1b31" containerName="registry-server" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.952216 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:26 crc kubenswrapper[4752]: I0122 12:05:26.995252 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkkdm"] Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.114139 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnr5\" (UniqueName: \"kubernetes.io/projected/de06ca95-1d55-444d-b6ef-bec7a1b9127e-kube-api-access-mpnr5\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.114405 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-catalog-content\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.114530 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-utilities\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.216644 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpnr5\" (UniqueName: \"kubernetes.io/projected/de06ca95-1d55-444d-b6ef-bec7a1b9127e-kube-api-access-mpnr5\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.216958 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-catalog-content\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.217120 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-utilities\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:27 crc 
kubenswrapper[4752]: I0122 12:05:27.217574 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-catalog-content\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.217709 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-utilities\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.235981 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpnr5\" (UniqueName: \"kubernetes.io/projected/de06ca95-1d55-444d-b6ef-bec7a1b9127e-kube-api-access-mpnr5\") pod \"community-operators-jkkdm\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " pod="openshift-marketplace/community-operators-jkkdm"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.314628 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkkdm"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.723759 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.724088 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.724131 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.725106 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.725166 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" gracePeriod=600
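The CrashLoopBackOff messages that follow quote only the cap ("back-off 5m0s"). Assuming kubelet's commonly documented restart back-off (an initial 10s delay that doubles after each failed restart, capped at 5m), the wait schedule behind the repeated "Error syncing pod, skipping" entries looks like this sketch:

package main

import (
	"fmt"
	"time"
)

// Sketch of the restart back-off implied by "back-off 5m0s": an assumed
// initial 10s delay doubling after each failed restart, capped at 5m0s.
func main() {
	delay, limit := 10*time.Second, 5*time.Minute
	for restart := 1; delay < limit; restart++ {
		fmt.Printf("restart %d: wait %s\n", restart, delay)
		delay *= 2
	}
	fmt.Println("every later restart: wait", limit) // matches the log's 5m0s
}

Once the cap is reached, each sync attempt is skipped until the back-off expires, which is why the same error repeats every few seconds below without the container actually restarting.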
pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:05:27 crc kubenswrapper[4752]: I0122 12:05:27.869692 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkkdm"] Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.651293 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" exitCode=0 Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.651381 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039"} Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.651627 4752 scope.go:117] "RemoveContainer" containerID="74ee9c42cf450447dce2eb7920a57efde1b27901b18fbef9d2c5637d0ec04e91" Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.652403 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:05:28 crc kubenswrapper[4752]: E0122 12:05:28.652777 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.653671 4752 generic.go:334] "Generic (PLEG): container finished" podID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerID="075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5" exitCode=0 Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.653716 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerDied","Data":"075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5"} Jan 22 12:05:28 crc kubenswrapper[4752]: I0122 12:05:28.653743 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerStarted","Data":"289707ad9148e2453253219877cc0d0c21349d36c5d033d0b9535ea9d5db2878"} Jan 22 12:05:30 crc kubenswrapper[4752]: I0122 12:05:30.680169 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerStarted","Data":"34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1"} Jan 22 12:05:31 crc kubenswrapper[4752]: I0122 12:05:31.693148 4752 generic.go:334] "Generic (PLEG): container finished" podID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerID="34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1" exitCode=0 Jan 22 12:05:31 crc kubenswrapper[4752]: I0122 12:05:31.693191 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerDied","Data":"34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1"} Jan 22 12:05:32 crc 
kubenswrapper[4752]: I0122 12:05:32.705340 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerStarted","Data":"72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856"} Jan 22 12:05:32 crc kubenswrapper[4752]: I0122 12:05:32.725383 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jkkdm" podStartSLOduration=3.249302362 podStartE2EDuration="6.725364527s" podCreationTimestamp="2026-01-22 12:05:26 +0000 UTC" firstStartedPulling="2026-01-22 12:05:28.656610844 +0000 UTC m=+6007.886553752" lastFinishedPulling="2026-01-22 12:05:32.132673009 +0000 UTC m=+6011.362615917" observedRunningTime="2026-01-22 12:05:32.72469801 +0000 UTC m=+6011.954640928" watchObservedRunningTime="2026-01-22 12:05:32.725364527 +0000 UTC m=+6011.955307435" Jan 22 12:05:37 crc kubenswrapper[4752]: I0122 12:05:37.315726 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:37 crc kubenswrapper[4752]: I0122 12:05:37.316265 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:37 crc kubenswrapper[4752]: I0122 12:05:37.363698 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:37 crc kubenswrapper[4752]: I0122 12:05:37.802294 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:37 crc kubenswrapper[4752]: I0122 12:05:37.853556 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkkdm"] Jan 22 12:05:39 crc kubenswrapper[4752]: I0122 12:05:39.777108 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jkkdm" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="registry-server" containerID="cri-o://72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856" gracePeriod=2 Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.098715 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:05:40 crc kubenswrapper[4752]: E0122 12:05:40.099351 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.247723 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.406679 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpnr5\" (UniqueName: \"kubernetes.io/projected/de06ca95-1d55-444d-b6ef-bec7a1b9127e-kube-api-access-mpnr5\") pod \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.407209 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-utilities\") pod \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.407293 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-catalog-content\") pod \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\" (UID: \"de06ca95-1d55-444d-b6ef-bec7a1b9127e\") " Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.408366 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-utilities" (OuterVolumeSpecName: "utilities") pod "de06ca95-1d55-444d-b6ef-bec7a1b9127e" (UID: "de06ca95-1d55-444d-b6ef-bec7a1b9127e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.419123 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de06ca95-1d55-444d-b6ef-bec7a1b9127e-kube-api-access-mpnr5" (OuterVolumeSpecName: "kube-api-access-mpnr5") pod "de06ca95-1d55-444d-b6ef-bec7a1b9127e" (UID: "de06ca95-1d55-444d-b6ef-bec7a1b9127e"). InnerVolumeSpecName "kube-api-access-mpnr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.476573 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de06ca95-1d55-444d-b6ef-bec7a1b9127e" (UID: "de06ca95-1d55-444d-b6ef-bec7a1b9127e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.509877 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.509912 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de06ca95-1d55-444d-b6ef-bec7a1b9127e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.509950 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpnr5\" (UniqueName: \"kubernetes.io/projected/de06ca95-1d55-444d-b6ef-bec7a1b9127e-kube-api-access-mpnr5\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.794032 4752 generic.go:334] "Generic (PLEG): container finished" podID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerID="72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856" exitCode=0 Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.794048 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkkdm" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.794090 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerDied","Data":"72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856"} Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.794698 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkkdm" event={"ID":"de06ca95-1d55-444d-b6ef-bec7a1b9127e","Type":"ContainerDied","Data":"289707ad9148e2453253219877cc0d0c21349d36c5d033d0b9535ea9d5db2878"} Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.794774 4752 scope.go:117] "RemoveContainer" containerID="72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.817147 4752 scope.go:117] "RemoveContainer" containerID="34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.836878 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkkdm"] Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.853394 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jkkdm"] Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.856746 4752 scope.go:117] "RemoveContainer" containerID="075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.895932 4752 scope.go:117] "RemoveContainer" containerID="72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856" Jan 22 12:05:40 crc kubenswrapper[4752]: E0122 12:05:40.896231 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856\": container with ID starting with 72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856 not found: ID does not exist" containerID="72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856" Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.896276 
Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.896276 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856"} err="failed to get container status \"72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856\": rpc error: code = NotFound desc = could not find container \"72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856\": container with ID starting with 72afac62169fdaae468a59c79c12addea5014f1672e85917161a3f7527ab3856 not found: ID does not exist"
Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.896299 4752 scope.go:117] "RemoveContainer" containerID="34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1"
Jan 22 12:05:40 crc kubenswrapper[4752]: E0122 12:05:40.896535 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1\": container with ID starting with 34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1 not found: ID does not exist" containerID="34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1"
Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.896580 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1"} err="failed to get container status \"34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1\": rpc error: code = NotFound desc = could not find container \"34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1\": container with ID starting with 34f77948392b88fb4acefb5f56499a97e861d17b70cb0642a04708c2b7fc86a1 not found: ID does not exist"
Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.896599 4752 scope.go:117] "RemoveContainer" containerID="075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5"
Jan 22 12:05:40 crc kubenswrapper[4752]: E0122 12:05:40.896837 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5\": container with ID starting with 075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5 not found: ID does not exist" containerID="075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5"
Jan 22 12:05:40 crc kubenswrapper[4752]: I0122 12:05:40.896880 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5"} err="failed to get container status \"075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5\": rpc error: code = NotFound desc = could not find container \"075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5\": container with ID starting with 075df987b57377fcd90f92dfb2ebcd7e057ba2247b47f4e158940f07c724daa5 not found: ID does not exist"
Jan 22 12:05:41 crc kubenswrapper[4752]: I0122 12:05:41.118294 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" path="/var/lib/kubelet/pods/de06ca95-1d55-444d-b6ef-bec7a1b9127e/volumes"
Jan 22 12:05:53 crc kubenswrapper[4752]: I0122 12:05:53.097720 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039"
Jan 22 12:05:53 crc kubenswrapper[4752]: E0122 12:05:53.098580 4752 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:06:08 crc kubenswrapper[4752]: I0122 12:06:08.097995 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:06:08 crc kubenswrapper[4752]: E0122 12:06:08.098833 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:06:21 crc kubenswrapper[4752]: I0122 12:06:21.111276 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:06:21 crc kubenswrapper[4752]: E0122 12:06:21.112181 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:06:32 crc kubenswrapper[4752]: I0122 12:06:32.098187 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:06:32 crc kubenswrapper[4752]: E0122 12:06:32.098975 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:06:45 crc kubenswrapper[4752]: I0122 12:06:45.098100 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:06:45 crc kubenswrapper[4752]: E0122 12:06:45.099155 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:07:00 crc kubenswrapper[4752]: I0122 12:07:00.098473 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:07:00 crc kubenswrapper[4752]: E0122 12:07:00.099351 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:07:13 crc kubenswrapper[4752]: I0122 12:07:13.098201 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:07:13 crc kubenswrapper[4752]: E0122 12:07:13.099161 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:07:28 crc kubenswrapper[4752]: I0122 12:07:28.098468 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:07:28 crc kubenswrapper[4752]: E0122 12:07:28.101321 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:07:40 crc kubenswrapper[4752]: I0122 12:07:40.097998 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:07:40 crc kubenswrapper[4752]: E0122 12:07:40.100391 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:07:55 crc kubenswrapper[4752]: I0122 12:07:55.098789 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:07:55 crc kubenswrapper[4752]: E0122 12:07:55.099744 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:08:07 crc kubenswrapper[4752]: I0122 12:08:07.098650 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:08:07 crc kubenswrapper[4752]: E0122 12:08:07.099523 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" 
podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:08:20 crc kubenswrapper[4752]: I0122 12:08:20.097695 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:08:20 crc kubenswrapper[4752]: E0122 12:08:20.098371 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:08:34 crc kubenswrapper[4752]: I0122 12:08:34.098207 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:08:34 crc kubenswrapper[4752]: E0122 12:08:34.099218 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:08:45 crc kubenswrapper[4752]: I0122 12:08:45.098497 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:08:45 crc kubenswrapper[4752]: E0122 12:08:45.099486 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:08:59 crc kubenswrapper[4752]: I0122 12:08:59.098493 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:08:59 crc kubenswrapper[4752]: E0122 12:08:59.101425 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.421354 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wzxt"] Jan 22 12:09:11 crc kubenswrapper[4752]: E0122 12:09:11.422389 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="registry-server" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.422407 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="registry-server" Jan 22 12:09:11 crc kubenswrapper[4752]: E0122 12:09:11.422448 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="extract-utilities" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 
12:09:11.422456 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="extract-utilities" Jan 22 12:09:11 crc kubenswrapper[4752]: E0122 12:09:11.422476 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="extract-content" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.422484 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="extract-content" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.422718 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="de06ca95-1d55-444d-b6ef-bec7a1b9127e" containerName="registry-server" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.424549 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.432417 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wzxt"] Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.535429 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-utilities\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.535694 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-catalog-content\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.535808 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8588m\" (UniqueName: \"kubernetes.io/projected/10af58ec-06c0-46e7-bddc-3fbc5f948b39-kube-api-access-8588m\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.637666 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-catalog-content\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.637711 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8588m\" (UniqueName: \"kubernetes.io/projected/10af58ec-06c0-46e7-bddc-3fbc5f948b39-kube-api-access-8588m\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.637795 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-utilities\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 
crc kubenswrapper[4752]: I0122 12:09:11.638143 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-catalog-content\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.638196 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-utilities\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.664801 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8588m\" (UniqueName: \"kubernetes.io/projected/10af58ec-06c0-46e7-bddc-3fbc5f948b39-kube-api-access-8588m\") pod \"redhat-marketplace-5wzxt\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:11 crc kubenswrapper[4752]: I0122 12:09:11.744626 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:12 crc kubenswrapper[4752]: I0122 12:09:12.098190 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:09:12 crc kubenswrapper[4752]: E0122 12:09:12.098654 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:09:12 crc kubenswrapper[4752]: I0122 12:09:12.255746 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wzxt"] Jan 22 12:09:12 crc kubenswrapper[4752]: W0122 12:09:12.264419 4752 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10af58ec_06c0_46e7_bddc_3fbc5f948b39.slice/crio-595e05a9000fda584feac42ac21a704c06ab91249cb8699d04d36a1ff9ab0c74 WatchSource:0}: Error finding container 595e05a9000fda584feac42ac21a704c06ab91249cb8699d04d36a1ff9ab0c74: Status 404 returned error can't find the container with id 595e05a9000fda584feac42ac21a704c06ab91249cb8699d04d36a1ff9ab0c74 Jan 22 12:09:13 crc kubenswrapper[4752]: I0122 12:09:13.062138 4752 generic.go:334] "Generic (PLEG): container finished" podID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerID="f3eef76fd4767fc6c1838f13d42166e285ed7e53bd85988de47a5c7a1ba155cd" exitCode=0 Jan 22 12:09:13 crc kubenswrapper[4752]: I0122 12:09:13.062230 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerDied","Data":"f3eef76fd4767fc6c1838f13d42166e285ed7e53bd85988de47a5c7a1ba155cd"} Jan 22 12:09:13 crc kubenswrapper[4752]: I0122 12:09:13.062533 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" 
event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerStarted","Data":"595e05a9000fda584feac42ac21a704c06ab91249cb8699d04d36a1ff9ab0c74"} Jan 22 12:09:14 crc kubenswrapper[4752]: I0122 12:09:14.081662 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerStarted","Data":"95759d0cf363aad6988a7d6c2a930dd8722871a71f1bc8199343b299576de9a7"} Jan 22 12:09:15 crc kubenswrapper[4752]: I0122 12:09:15.092659 4752 generic.go:334] "Generic (PLEG): container finished" podID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerID="95759d0cf363aad6988a7d6c2a930dd8722871a71f1bc8199343b299576de9a7" exitCode=0 Jan 22 12:09:15 crc kubenswrapper[4752]: I0122 12:09:15.092705 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerDied","Data":"95759d0cf363aad6988a7d6c2a930dd8722871a71f1bc8199343b299576de9a7"} Jan 22 12:09:16 crc kubenswrapper[4752]: I0122 12:09:16.105297 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerStarted","Data":"d4727e2eef12293aa5e2d4ab7866b124eb2f392334e8842a7a1ad3ea3b20c13f"} Jan 22 12:09:16 crc kubenswrapper[4752]: I0122 12:09:16.129671 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wzxt" podStartSLOduration=2.683991567 podStartE2EDuration="5.12964648s" podCreationTimestamp="2026-01-22 12:09:11 +0000 UTC" firstStartedPulling="2026-01-22 12:09:13.063848391 +0000 UTC m=+6232.293791309" lastFinishedPulling="2026-01-22 12:09:15.509503304 +0000 UTC m=+6234.739446222" observedRunningTime="2026-01-22 12:09:16.126283532 +0000 UTC m=+6235.356226460" watchObservedRunningTime="2026-01-22 12:09:16.12964648 +0000 UTC m=+6235.359589388" Jan 22 12:09:21 crc kubenswrapper[4752]: I0122 12:09:21.745112 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:21 crc kubenswrapper[4752]: I0122 12:09:21.745663 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:21 crc kubenswrapper[4752]: I0122 12:09:21.802729 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:22 crc kubenswrapper[4752]: I0122 12:09:22.219376 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:22 crc kubenswrapper[4752]: I0122 12:09:22.269735 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wzxt"] Jan 22 12:09:24 crc kubenswrapper[4752]: I0122 12:09:24.182017 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wzxt" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="registry-server" containerID="cri-o://d4727e2eef12293aa5e2d4ab7866b124eb2f392334e8842a7a1ad3ea3b20c13f" gracePeriod=2 Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.101329 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:09:25 crc kubenswrapper[4752]: E0122 12:09:25.101831 4752 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.192499 4752 generic.go:334] "Generic (PLEG): container finished" podID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerID="d4727e2eef12293aa5e2d4ab7866b124eb2f392334e8842a7a1ad3ea3b20c13f" exitCode=0 Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.192572 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerDied","Data":"d4727e2eef12293aa5e2d4ab7866b124eb2f392334e8842a7a1ad3ea3b20c13f"} Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.192615 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wzxt" event={"ID":"10af58ec-06c0-46e7-bddc-3fbc5f948b39","Type":"ContainerDied","Data":"595e05a9000fda584feac42ac21a704c06ab91249cb8699d04d36a1ff9ab0c74"} Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.192631 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595e05a9000fda584feac42ac21a704c06ab91249cb8699d04d36a1ff9ab0c74" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.212346 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.330744 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-utilities\") pod \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.331221 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-catalog-content\") pod \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.331402 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8588m\" (UniqueName: \"kubernetes.io/projected/10af58ec-06c0-46e7-bddc-3fbc5f948b39-kube-api-access-8588m\") pod \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\" (UID: \"10af58ec-06c0-46e7-bddc-3fbc5f948b39\") " Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.331596 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-utilities" (OuterVolumeSpecName: "utilities") pod "10af58ec-06c0-46e7-bddc-3fbc5f948b39" (UID: "10af58ec-06c0-46e7-bddc-3fbc5f948b39"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.332062 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.345647 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10af58ec-06c0-46e7-bddc-3fbc5f948b39-kube-api-access-8588m" (OuterVolumeSpecName: "kube-api-access-8588m") pod "10af58ec-06c0-46e7-bddc-3fbc5f948b39" (UID: "10af58ec-06c0-46e7-bddc-3fbc5f948b39"). InnerVolumeSpecName "kube-api-access-8588m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.351761 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10af58ec-06c0-46e7-bddc-3fbc5f948b39" (UID: "10af58ec-06c0-46e7-bddc-3fbc5f948b39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.433972 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10af58ec-06c0-46e7-bddc-3fbc5f948b39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:09:25 crc kubenswrapper[4752]: I0122 12:09:25.434009 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8588m\" (UniqueName: \"kubernetes.io/projected/10af58ec-06c0-46e7-bddc-3fbc5f948b39-kube-api-access-8588m\") on node \"crc\" DevicePath \"\"" Jan 22 12:09:26 crc kubenswrapper[4752]: I0122 12:09:26.207284 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wzxt" Jan 22 12:09:26 crc kubenswrapper[4752]: I0122 12:09:26.274535 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wzxt"] Jan 22 12:09:26 crc kubenswrapper[4752]: I0122 12:09:26.285082 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wzxt"] Jan 22 12:09:27 crc kubenswrapper[4752]: I0122 12:09:27.108507 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" path="/var/lib/kubelet/pods/10af58ec-06c0-46e7-bddc-3fbc5f948b39/volumes" Jan 22 12:09:40 crc kubenswrapper[4752]: I0122 12:09:40.098256 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:09:40 crc kubenswrapper[4752]: E0122 12:09:40.099027 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:09:54 crc kubenswrapper[4752]: I0122 12:09:54.098377 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:09:54 crc kubenswrapper[4752]: E0122 12:09:54.099383 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:10:08 crc kubenswrapper[4752]: I0122 12:10:08.098909 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:10:08 crc kubenswrapper[4752]: E0122 12:10:08.099727 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:10:19 crc kubenswrapper[4752]: I0122 12:10:19.097882 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:10:19 crc kubenswrapper[4752]: E0122 12:10:19.098638 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:10:30 crc kubenswrapper[4752]: I0122 12:10:30.098201 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:10:30 
Jan 22 12:10:30 crc kubenswrapper[4752]: I0122 12:10:30.865963 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3"}
Jan 22 12:12:57 crc kubenswrapper[4752]: I0122 12:12:57.724248 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:12:57 crc kubenswrapper[4752]: I0122 12:12:57.724707 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:13:27 crc kubenswrapper[4752]: I0122 12:13:27.724396 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:13:27 crc kubenswrapper[4752]: I0122 12:13:27.725326 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:13:57 crc kubenswrapper[4752]: I0122 12:13:57.723638 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:13:57 crc kubenswrapper[4752]: I0122 12:13:57.724253 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:13:57 crc kubenswrapper[4752]: I0122 12:13:57.724298 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 12:13:57 crc kubenswrapper[4752]: I0122 12:13:57.725078 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 12:13:57 crc kubenswrapper[4752]: I0122 12:13:57.725130 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3" gracePeriod=600
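
The liveness probe here is a plain HTTP GET against 127.0.0.1:8798/health; it fails at 12:12:57, 12:13:27, and 12:13:57 (three failures, 30 s apart) before the kubelet kills the container with its 600 s grace period. A minimal sketch of the same check from a shell on the node, using the endpoint from the probe output above:

    # -f maps HTTP errors to a non-zero exit; connection refused also exits non-zero
    curl -sf --max-time 1 http://127.0.0.1:8798/health && echo healthy || echo unhealthy
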
containerID="cri-o://98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3" gracePeriod=600 Jan 22 12:13:58 crc kubenswrapper[4752]: I0122 12:13:58.000938 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3" exitCode=0 Jan 22 12:13:58 crc kubenswrapper[4752]: I0122 12:13:58.001005 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3"} Jan 22 12:13:58 crc kubenswrapper[4752]: I0122 12:13:58.001444 4752 scope.go:117] "RemoveContainer" containerID="9bdbe0e134edfe8a35f46a9a2371cbd31e79022e45936553737d53546c29d039" Jan 22 12:13:59 crc kubenswrapper[4752]: I0122 12:13:59.014676 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb"} Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.157758 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg"] Jan 22 12:15:00 crc kubenswrapper[4752]: E0122 12:15:00.158780 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.158798 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4752]: E0122 12:15:00.158836 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="extract-utilities" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.158844 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="extract-utilities" Jan 22 12:15:00 crc kubenswrapper[4752]: E0122 12:15:00.158880 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="extract-content" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.158889 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="extract-content" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.159130 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="10af58ec-06c0-46e7-bddc-3fbc5f948b39" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.160056 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.162121 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.162266 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.174643 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg"] Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.300744 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96ea0a2-dbbc-4953-a470-6254841f35d1-config-volume\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.300839 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pbz\" (UniqueName: \"kubernetes.io/projected/e96ea0a2-dbbc-4953-a470-6254841f35d1-kube-api-access-p6pbz\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.300940 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96ea0a2-dbbc-4953-a470-6254841f35d1-secret-volume\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.403371 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pbz\" (UniqueName: \"kubernetes.io/projected/e96ea0a2-dbbc-4953-a470-6254841f35d1-kube-api-access-p6pbz\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.403470 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96ea0a2-dbbc-4953-a470-6254841f35d1-secret-volume\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.403743 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96ea0a2-dbbc-4953-a470-6254841f35d1-config-volume\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.404523 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96ea0a2-dbbc-4953-a470-6254841f35d1-config-volume\") pod 
\"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.416670 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96ea0a2-dbbc-4953-a470-6254841f35d1-secret-volume\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.420634 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pbz\" (UniqueName: \"kubernetes.io/projected/e96ea0a2-dbbc-4953-a470-6254841f35d1-kube-api-access-p6pbz\") pod \"collect-profiles-29484735-wxjjg\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.484259 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:00 crc kubenswrapper[4752]: I0122 12:15:00.932231 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg"] Jan 22 12:15:01 crc kubenswrapper[4752]: I0122 12:15:01.626480 4752 generic.go:334] "Generic (PLEG): container finished" podID="e96ea0a2-dbbc-4953-a470-6254841f35d1" containerID="e007ef76a8051acc41a2176c548df2b6e1b5b2942f671790d1337816ece82f95" exitCode=0 Jan 22 12:15:01 crc kubenswrapper[4752]: I0122 12:15:01.626523 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" event={"ID":"e96ea0a2-dbbc-4953-a470-6254841f35d1","Type":"ContainerDied","Data":"e007ef76a8051acc41a2176c548df2b6e1b5b2942f671790d1337816ece82f95"} Jan 22 12:15:01 crc kubenswrapper[4752]: I0122 12:15:01.626769 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" event={"ID":"e96ea0a2-dbbc-4953-a470-6254841f35d1","Type":"ContainerStarted","Data":"c483679c88de4e4e415196cdc2d286b98f3f60c291f1fea545fce547641a7d2b"} Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.004280 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.065606 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96ea0a2-dbbc-4953-a470-6254841f35d1-secret-volume\") pod \"e96ea0a2-dbbc-4953-a470-6254841f35d1\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.065739 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96ea0a2-dbbc-4953-a470-6254841f35d1-config-volume\") pod \"e96ea0a2-dbbc-4953-a470-6254841f35d1\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.065969 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6pbz\" (UniqueName: \"kubernetes.io/projected/e96ea0a2-dbbc-4953-a470-6254841f35d1-kube-api-access-p6pbz\") pod \"e96ea0a2-dbbc-4953-a470-6254841f35d1\" (UID: \"e96ea0a2-dbbc-4953-a470-6254841f35d1\") " Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.066463 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96ea0a2-dbbc-4953-a470-6254841f35d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "e96ea0a2-dbbc-4953-a470-6254841f35d1" (UID: "e96ea0a2-dbbc-4953-a470-6254841f35d1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.066647 4752 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96ea0a2-dbbc-4953-a470-6254841f35d1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.073126 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96ea0a2-dbbc-4953-a470-6254841f35d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e96ea0a2-dbbc-4953-a470-6254841f35d1" (UID: "e96ea0a2-dbbc-4953-a470-6254841f35d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.077289 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96ea0a2-dbbc-4953-a470-6254841f35d1-kube-api-access-p6pbz" (OuterVolumeSpecName: "kube-api-access-p6pbz") pod "e96ea0a2-dbbc-4953-a470-6254841f35d1" (UID: "e96ea0a2-dbbc-4953-a470-6254841f35d1"). InnerVolumeSpecName "kube-api-access-p6pbz". 
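
Each VerifyControllerAttachedVolume / MountVolume.SetUp / UnmountVolume.TearDown triple above operates on a per-pod directory under /var/lib/kubelet/pods/<podUID>/volumes/, keyed by plugin name. A sketch for inspecting it on the node, assuming shell access and that the pod has not yet been garbage-collected (UID taken from the collect-profiles entries above):

    sudo ls /var/lib/kubelet/pods/e96ea0a2-dbbc-4953-a470-6254841f35d1/volumes/
    # expected while mounted: kubernetes.io~configmap  kubernetes.io~projected  kubernetes.io~secret
    # after TearDown, the "Cleaned up orphaned pod volumes dir" pass removes the whole pod directory
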
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.168912 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6pbz\" (UniqueName: \"kubernetes.io/projected/e96ea0a2-dbbc-4953-a470-6254841f35d1-kube-api-access-p6pbz\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.168952 4752 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96ea0a2-dbbc-4953-a470-6254841f35d1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.646340 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" event={"ID":"e96ea0a2-dbbc-4953-a470-6254841f35d1","Type":"ContainerDied","Data":"c483679c88de4e4e415196cdc2d286b98f3f60c291f1fea545fce547641a7d2b"} Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.646383 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-wxjjg" Jan 22 12:15:03 crc kubenswrapper[4752]: I0122 12:15:03.646389 4752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c483679c88de4e4e415196cdc2d286b98f3f60c291f1fea545fce547641a7d2b" Jan 22 12:15:04 crc kubenswrapper[4752]: I0122 12:15:04.080331 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"] Jan 22 12:15:04 crc kubenswrapper[4752]: I0122 12:15:04.088565 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-vlj44"] Jan 22 12:15:05 crc kubenswrapper[4752]: I0122 12:15:05.113318 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de46e919-7db3-4f81-af16-9df1b2e1e114" path="/var/lib/kubelet/pods/de46e919-7db3-4f81-af16-9df1b2e1e114/volumes" Jan 22 12:15:48 crc kubenswrapper[4752]: I0122 12:15:48.783890 4752 scope.go:117] "RemoveContainer" containerID="f3eef76fd4767fc6c1838f13d42166e285ed7e53bd85988de47a5c7a1ba155cd" Jan 22 12:15:48 crc kubenswrapper[4752]: I0122 12:15:48.814791 4752 scope.go:117] "RemoveContainer" containerID="d4727e2eef12293aa5e2d4ab7866b124eb2f392334e8842a7a1ad3ea3b20c13f" Jan 22 12:15:48 crc kubenswrapper[4752]: I0122 12:15:48.887305 4752 scope.go:117] "RemoveContainer" containerID="95759d0cf363aad6988a7d6c2a930dd8722871a71f1bc8199343b299576de9a7" Jan 22 12:15:48 crc kubenswrapper[4752]: I0122 12:15:48.910274 4752 scope.go:117] "RemoveContainer" containerID="9f4be302563835280b8e4a49a87fa03de8ad8752a031de3064e436094fa2f60d" Jan 22 12:16:27 crc kubenswrapper[4752]: I0122 12:16:27.723892 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:16:27 crc kubenswrapper[4752]: I0122 12:16:27.724594 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.800160 4752 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pj2tq"] Jan 22 12:16:50 crc kubenswrapper[4752]: E0122 12:16:50.801300 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96ea0a2-dbbc-4953-a470-6254841f35d1" containerName="collect-profiles" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.801323 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96ea0a2-dbbc-4953-a470-6254841f35d1" containerName="collect-profiles" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.801517 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96ea0a2-dbbc-4953-a470-6254841f35d1" containerName="collect-profiles" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.803374 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.831554 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pj2tq"] Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.897997 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-catalog-content\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.898074 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-utilities\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.898160 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tv99\" (UniqueName: \"kubernetes.io/projected/bd5bed34-d375-435c-b877-d836c8e314be-kube-api-access-7tv99\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.999266 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-catalog-content\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.999621 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-utilities\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:50 crc kubenswrapper[4752]: I0122 12:16:50.999666 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tv99\" (UniqueName: \"kubernetes.io/projected/bd5bed34-d375-435c-b877-d836c8e314be-kube-api-access-7tv99\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:51 crc kubenswrapper[4752]: 
I0122 12:16:51.000292 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-catalog-content\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:51 crc kubenswrapper[4752]: I0122 12:16:51.000319 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-utilities\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:51 crc kubenswrapper[4752]: I0122 12:16:51.018775 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tv99\" (UniqueName: \"kubernetes.io/projected/bd5bed34-d375-435c-b877-d836c8e314be-kube-api-access-7tv99\") pod \"community-operators-pj2tq\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") " pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:51 crc kubenswrapper[4752]: I0122 12:16:51.136320 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:16:51 crc kubenswrapper[4752]: I0122 12:16:51.759975 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pj2tq"] Jan 22 12:16:51 crc kubenswrapper[4752]: I0122 12:16:51.786282 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerStarted","Data":"a51aabb0f06bd4a0cf03de784252436f0f3703ff6ef3d7c92588497952408525"} Jan 22 12:16:52 crc kubenswrapper[4752]: I0122 12:16:52.798098 4752 generic.go:334] "Generic (PLEG): container finished" podID="bd5bed34-d375-435c-b877-d836c8e314be" containerID="4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7" exitCode=0 Jan 22 12:16:52 crc kubenswrapper[4752]: I0122 12:16:52.798154 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerDied","Data":"4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7"} Jan 22 12:16:52 crc kubenswrapper[4752]: I0122 12:16:52.800410 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:16:53 crc kubenswrapper[4752]: I0122 12:16:53.825036 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerStarted","Data":"8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c"} Jan 22 12:16:54 crc kubenswrapper[4752]: I0122 12:16:54.835994 4752 generic.go:334] "Generic (PLEG): container finished" podID="bd5bed34-d375-435c-b877-d836c8e314be" containerID="8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c" exitCode=0 Jan 22 12:16:54 crc kubenswrapper[4752]: I0122 12:16:54.836053 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerDied","Data":"8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c"} Jan 22 12:16:56 crc kubenswrapper[4752]: I0122 12:16:56.858506 4752 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerStarted","Data":"e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e"} Jan 22 12:16:56 crc kubenswrapper[4752]: I0122 12:16:56.886626 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pj2tq" podStartSLOduration=3.98228199 podStartE2EDuration="6.886602814s" podCreationTimestamp="2026-01-22 12:16:50 +0000 UTC" firstStartedPulling="2026-01-22 12:16:52.800179689 +0000 UTC m=+6692.030122597" lastFinishedPulling="2026-01-22 12:16:55.704500513 +0000 UTC m=+6694.934443421" observedRunningTime="2026-01-22 12:16:56.876408198 +0000 UTC m=+6696.106351126" watchObservedRunningTime="2026-01-22 12:16:56.886602814 +0000 UTC m=+6696.116545722" Jan 22 12:16:57 crc kubenswrapper[4752]: I0122 12:16:57.724012 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:16:57 crc kubenswrapper[4752]: I0122 12:16:57.724069 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:17:01 crc kubenswrapper[4752]: I0122 12:17:01.139203 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:17:01 crc kubenswrapper[4752]: I0122 12:17:01.140003 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:17:01 crc kubenswrapper[4752]: I0122 12:17:01.196767 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:17:01 crc kubenswrapper[4752]: I0122 12:17:01.970964 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pj2tq" Jan 22 12:17:02 crc kubenswrapper[4752]: I0122 12:17:02.026406 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pj2tq"] Jan 22 12:17:03 crc kubenswrapper[4752]: I0122 12:17:03.931401 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pj2tq" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="registry-server" containerID="cri-o://e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e" gracePeriod=2 Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.434825 4752 util.go:48] "No ready sandbox for pod can be found. 
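
The startup-latency tracker entry above separates the two durations it reports: podStartSLOduration excludes image pulling, podStartE2EDuration includes it, and their difference is exactly the pull window it also records:

    # podStartE2EDuration - podStartSLOduration = 6.886602814 - 3.982281990 = 2.904320824 s
    # lastFinishedPulling - firstStartedPulling = 12:16:55.704500513 - 12:16:52.800179689 = 2.904320824 s
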
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.611816 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tv99\" (UniqueName: \"kubernetes.io/projected/bd5bed34-d375-435c-b877-d836c8e314be-kube-api-access-7tv99\") pod \"bd5bed34-d375-435c-b877-d836c8e314be\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") "
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.611943 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-utilities\") pod \"bd5bed34-d375-435c-b877-d836c8e314be\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") "
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.612241 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-catalog-content\") pod \"bd5bed34-d375-435c-b877-d836c8e314be\" (UID: \"bd5bed34-d375-435c-b877-d836c8e314be\") "
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.612748 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-utilities" (OuterVolumeSpecName: "utilities") pod "bd5bed34-d375-435c-b877-d836c8e314be" (UID: "bd5bed34-d375-435c-b877-d836c8e314be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.618508 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5bed34-d375-435c-b877-d836c8e314be-kube-api-access-7tv99" (OuterVolumeSpecName: "kube-api-access-7tv99") pod "bd5bed34-d375-435c-b877-d836c8e314be" (UID: "bd5bed34-d375-435c-b877-d836c8e314be"). InnerVolumeSpecName "kube-api-access-7tv99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.669421 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd5bed34-d375-435c-b877-d836c8e314be" (UID: "bd5bed34-d375-435c-b877-d836c8e314be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.714338 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.714379 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tv99\" (UniqueName: \"kubernetes.io/projected/bd5bed34-d375-435c-b877-d836c8e314be-kube-api-access-7tv99\") on node \"crc\" DevicePath \"\""
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.714398 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5bed34-d375-435c-b877-d836c8e314be-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.943943 4752 generic.go:334] "Generic (PLEG): container finished" podID="bd5bed34-d375-435c-b877-d836c8e314be" containerID="e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e" exitCode=0
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.944018 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerDied","Data":"e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e"}
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.944170 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pj2tq" event={"ID":"bd5bed34-d375-435c-b877-d836c8e314be","Type":"ContainerDied","Data":"a51aabb0f06bd4a0cf03de784252436f0f3703ff6ef3d7c92588497952408525"}
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.944218 4752 scope.go:117] "RemoveContainer" containerID="e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e"
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.944238 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pj2tq"
Jan 22 12:17:04 crc kubenswrapper[4752]: I0122 12:17:04.980962 4752 scope.go:117] "RemoveContainer" containerID="8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.013050 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pj2tq"]
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.024447 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pj2tq"]
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.024494 4752 scope.go:117] "RemoveContainer" containerID="4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.078286 4752 scope.go:117] "RemoveContainer" containerID="e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e"
Jan 22 12:17:05 crc kubenswrapper[4752]: E0122 12:17:05.078817 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e\": container with ID starting with e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e not found: ID does not exist" containerID="e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.079666 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e"} err="failed to get container status \"e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e\": rpc error: code = NotFound desc = could not find container \"e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e\": container with ID starting with e0b111256650585b300d02eba78151654afdf10ccd428534237c8945cd73724e not found: ID does not exist"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.079753 4752 scope.go:117] "RemoveContainer" containerID="8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c"
Jan 22 12:17:05 crc kubenswrapper[4752]: E0122 12:17:05.080486 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c\": container with ID starting with 8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c not found: ID does not exist" containerID="8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.080660 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c"} err="failed to get container status \"8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c\": rpc error: code = NotFound desc = could not find container \"8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c\": container with ID starting with 8081dda8c89e67ef0751d30db7a6ec453e447dee376a62029e07a647946b5c7c not found: ID does not exist"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.080740 4752 scope.go:117] "RemoveContainer" containerID="4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7"
Jan 22 12:17:05 crc kubenswrapper[4752]: E0122 12:17:05.082455 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7\": container with ID starting with 4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7 not found: ID does not exist" containerID="4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.082500 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7"} err="failed to get container status \"4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7\": rpc error: code = NotFound desc = could not find container \"4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7\": container with ID starting with 4bb02013d6b0167d6897c7a4ffca0e65e967118c20cf033f4725aaae5a4356a7 not found: ID does not exist"
Jan 22 12:17:05 crc kubenswrapper[4752]: I0122 12:17:05.111390 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5bed34-d375-435c-b877-d836c8e314be" path="/var/lib/kubelet/pods/bd5bed34-d375-435c-b877-d836c8e314be/volumes"
Jan 22 12:17:27 crc kubenswrapper[4752]: I0122 12:17:27.724154 4752 patch_prober.go:28] interesting pod/machine-config-daemon-v6hm8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:17:27 crc kubenswrapper[4752]: I0122 12:17:27.724819 4752 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:17:27 crc kubenswrapper[4752]: I0122 12:17:27.724892 4752 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8"
Jan 22 12:17:27 crc kubenswrapper[4752]: I0122 12:17:27.725790 4752 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb"} pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 12:17:27 crc kubenswrapper[4752]: I0122 12:17:27.725850 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" containerName="machine-config-daemon" containerID="cri-o://722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" gracePeriod=600
Jan 22 12:17:27 crc kubenswrapper[4752]: E0122 12:17:27.849436 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79"
Jan 22 12:17:28 crc kubenswrapper[4752]: I0122 12:17:28.182508 4752 generic.go:334] "Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" exitCode=0
"Generic (PLEG): container finished" podID="eb8df70c-9474-4827-8831-f39fc6883d79" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" exitCode=0 Jan 22 12:17:28 crc kubenswrapper[4752]: I0122 12:17:28.182566 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerDied","Data":"722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb"} Jan 22 12:17:28 crc kubenswrapper[4752]: I0122 12:17:28.182622 4752 scope.go:117] "RemoveContainer" containerID="98da58823f2068f6df4d900457c09373aba62fadfc66f786c1bd71ec543e45d3" Jan 22 12:17:28 crc kubenswrapper[4752]: I0122 12:17:28.183395 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:17:28 crc kubenswrapper[4752]: E0122 12:17:28.183732 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:17:40 crc kubenswrapper[4752]: I0122 12:17:40.098222 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:17:40 crc kubenswrapper[4752]: E0122 12:17:40.098880 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:17:53 crc kubenswrapper[4752]: I0122 12:17:53.098467 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:17:53 crc kubenswrapper[4752]: E0122 12:17:53.099258 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:18:06 crc kubenswrapper[4752]: I0122 12:18:06.098831 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:18:06 crc kubenswrapper[4752]: E0122 12:18:06.099711 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:18:18 crc kubenswrapper[4752]: I0122 12:18:18.097758 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" 
Jan 22 12:18:18 crc kubenswrapper[4752]: E0122 12:18:18.098732 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:18:32 crc kubenswrapper[4752]: I0122 12:18:32.098016 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:18:32 crc kubenswrapper[4752]: E0122 12:18:32.098710 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:18:43 crc kubenswrapper[4752]: I0122 12:18:43.097599 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:18:43 crc kubenswrapper[4752]: E0122 12:18:43.098474 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:18:55 crc kubenswrapper[4752]: I0122 12:18:55.097999 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:18:55 crc kubenswrapper[4752]: E0122 12:18:55.098785 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:19:07 crc kubenswrapper[4752]: I0122 12:19:07.098197 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:19:07 crc kubenswrapper[4752]: E0122 12:19:07.098971 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:19:22 crc kubenswrapper[4752]: I0122 12:19:22.098407 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:19:22 crc kubenswrapper[4752]: E0122 12:19:22.099304 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:19:36 crc kubenswrapper[4752]: I0122 12:19:36.098532 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:19:36 crc kubenswrapper[4752]: E0122 12:19:36.099599 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:19:48 crc kubenswrapper[4752]: I0122 12:19:48.098741 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:19:48 crc kubenswrapper[4752]: E0122 12:19:48.100279 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.623924 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h9spj/must-gather-jhwkd"] Jan 22 12:19:57 crc kubenswrapper[4752]: E0122 12:19:57.624980 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="extract-utilities" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.624998 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="extract-utilities" Jan 22 12:19:57 crc kubenswrapper[4752]: E0122 12:19:57.625044 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="extract-content" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.625052 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="extract-content" Jan 22 12:19:57 crc kubenswrapper[4752]: E0122 12:19:57.625067 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="registry-server" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.625072 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="registry-server" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.625261 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5bed34-d375-435c-b877-d836c8e314be" containerName="registry-server" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.626433 4752 util.go:30] "No sandbox for pod can be found. 
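
The cpu_manager / state_mem / memory_manager entries above are the kubelet's resource managers pruning checkpointed assignments for containers of pods that no longer exist. Both managers persist their state as JSON checkpoints under /var/lib/kubelet; a minimal look, assuming shell access on the node and the default checkpoint paths:

    sudo cat /var/lib/kubelet/cpu_manager_state
    sudo cat /var/lib/kubelet/memory_manager_state
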
Need to start a new one" pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.628500 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h9spj"/"openshift-service-ca.crt" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.628672 4752 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-h9spj"/"default-dockercfg-7jb76" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.629583 4752 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h9spj"/"kube-root-ca.crt" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.637735 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h9spj/must-gather-jhwkd"] Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.765254 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrktv\" (UniqueName: \"kubernetes.io/projected/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-kube-api-access-zrktv\") pod \"must-gather-jhwkd\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.765323 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-must-gather-output\") pod \"must-gather-jhwkd\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.869536 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrktv\" (UniqueName: \"kubernetes.io/projected/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-kube-api-access-zrktv\") pod \"must-gather-jhwkd\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.869628 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-must-gather-output\") pod \"must-gather-jhwkd\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.870337 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-must-gather-output\") pod \"must-gather-jhwkd\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.906581 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrktv\" (UniqueName: \"kubernetes.io/projected/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-kube-api-access-zrktv\") pod \"must-gather-jhwkd\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:57 crc kubenswrapper[4752]: I0122 12:19:57.989657 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:19:58 crc kubenswrapper[4752]: I0122 12:19:58.496156 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h9spj/must-gather-jhwkd"] Jan 22 12:19:58 crc kubenswrapper[4752]: I0122 12:19:58.963878 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h9spj/must-gather-jhwkd" event={"ID":"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2","Type":"ContainerStarted","Data":"453962b02edd71b3171c88145b2a0c363db2461585d99ab71db7eb03f25f5549"} Jan 22 12:20:02 crc kubenswrapper[4752]: I0122 12:20:02.100584 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:20:02 crc kubenswrapper[4752]: E0122 12:20:02.102905 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:20:11 crc kubenswrapper[4752]: E0122 12:20:11.335599 4752 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Jan 22 12:20:11 crc kubenswrapper[4752]: E0122 12:20:11.336596 4752 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 12:20:11 crc kubenswrapper[4752]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Jan 22 12:20:11 crc kubenswrapper[4752]: HAVE_SESSION_TOOLS=true Jan 22 12:20:11 crc kubenswrapper[4752]: else Jan 22 12:20:11 crc kubenswrapper[4752]: HAVE_SESSION_TOOLS=false Jan 22 12:20:11 crc kubenswrapper[4752]: fi Jan 22 12:20:11 crc kubenswrapper[4752]: Jan 22 12:20:11 crc kubenswrapper[4752]: Jan 22 12:20:11 crc kubenswrapper[4752]: echo "[disk usage checker] Started" Jan 22 12:20:11 crc kubenswrapper[4752]: target_dir="/must-gather" Jan 22 12:20:11 crc kubenswrapper[4752]: usage_percentage_limit="80" Jan 22 12:20:11 crc kubenswrapper[4752]: while true; do Jan 22 12:20:11 crc kubenswrapper[4752]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Jan 22 12:20:11 crc kubenswrapper[4752]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Jan 22 12:20:11 crc kubenswrapper[4752]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Jan 22 12:20:11 crc kubenswrapper[4752]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." 
Jan 22 12:20:11 crc kubenswrapper[4752]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 22 12:20:11 crc kubenswrapper[4752]: ps -o sess --no-headers | sort -u | while read sid; do Jan 22 12:20:11 crc kubenswrapper[4752]: [[ "$sid" -eq "${$}" ]] && continue Jan 22 12:20:11 crc kubenswrapper[4752]: pkill --signal SIGKILL --session "$sid" Jan 22 12:20:11 crc kubenswrapper[4752]: done Jan 22 12:20:11 crc kubenswrapper[4752]: else Jan 22 12:20:11 crc kubenswrapper[4752]: kill 0 Jan 22 12:20:11 crc kubenswrapper[4752]: fi Jan 22 12:20:11 crc kubenswrapper[4752]: exit 1 Jan 22 12:20:11 crc kubenswrapper[4752]: fi Jan 22 12:20:11 crc kubenswrapper[4752]: sleep 5 Jan 22 12:20:11 crc kubenswrapper[4752]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 22 12:20:11 crc kubenswrapper[4752]: setsid -w bash <<-MUSTGATHER_EOF Jan 22 12:20:11 crc kubenswrapper[4752]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather Jan 22 12:20:11 crc kubenswrapper[4752]: MUSTGATHER_EOF Jan 22 12:20:11 crc kubenswrapper[4752]: else Jan 22 12:20:11 crc kubenswrapper[4752]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather Jan 22 12:20:11 crc kubenswrapper[4752]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrktv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-jhwkd_openshift-must-gather-h9spj(8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 22 12:20:11 crc kubenswrapper[4752]: > logger="UnhandledError" Jan 22 12:20:11 crc kubenswrapper[4752]: E0122 12:20:11.339114 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-h9spj/must-gather-jhwkd" podUID="8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2" Jan 22 12:20:12 crc kubenswrapper[4752]: E0122 12:20:12.087783 4752 pod_workers.go:1301] "Error 
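
For readability, here is the gather container's Command from the UnhandledError dump above, reconstructed as a standalone script; indentation is approximate, but the commands, strings, and environment line are verbatim from the log. A background watchdog polls /must-gather every 5 seconds and, past 80% usage, SIGKILLs every other session (or the whole process group when session tools are missing); the gather run itself is wrapped in setsid so the watchdog can target it:

    # Reconstructed from the container spec logged above -- not an authoritative copy.
    if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then
      HAVE_SESSION_TOOLS=true
    else
      HAVE_SESSION_TOOLS=false
    fi

    echo "[disk usage checker] Started"
    target_dir="/must-gather"
    usage_percentage_limit="80"
    while true; do
      # Second line of `df -P` output, 5th column = use%, with the trailing % stripped.
      usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//')
      echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}"
      if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then
        echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..."
        if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
          ps -o sess --no-headers | sort -u | while read sid; do
            # $$ still names the main shell inside this background subshell,
            # so the watchdog's own session is skipped.
            [[ "$sid" -eq "${$}" ]] && continue
            pkill --signal SIGKILL --session "$sid"
          done
        else
          kill 0   # no setsid/ps/pkill: take down the whole process group instead
        fi
        exit 1
      fi
      sleep 5
    done &
    if [ "$HAVE_SESSION_TOOLS" = "true" ]; then
      setsid -w bash <<-MUSTGATHER_EOF
    ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather
    MUSTGATHER_EOF
    else
      ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather
    fi
    sync && echo 'Caches written to disk'
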
syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-h9spj/must-gather-jhwkd" podUID="8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2" Jan 22 12:20:16 crc kubenswrapper[4752]: I0122 12:20:16.098201 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:20:16 crc kubenswrapper[4752]: E0122 12:20:16.099003 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.468393 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h9spj/must-gather-jhwkd"] Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.480503 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h9spj/must-gather-jhwkd"] Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.831757 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.857995 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrktv\" (UniqueName: \"kubernetes.io/projected/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-kube-api-access-zrktv\") pod \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.858077 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-must-gather-output\") pod \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\" (UID: \"8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2\") " Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.858462 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2" (UID: "8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.858764 4752 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.864574 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-kube-api-access-zrktv" (OuterVolumeSpecName: "kube-api-access-zrktv") pod "8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2" (UID: "8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2"). InnerVolumeSpecName "kube-api-access-zrktv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:20:23 crc kubenswrapper[4752]: I0122 12:20:23.960834 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrktv\" (UniqueName: \"kubernetes.io/projected/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2-kube-api-access-zrktv\") on node \"crc\" DevicePath \"\"" Jan 22 12:20:24 crc kubenswrapper[4752]: I0122 12:20:24.195936 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h9spj/must-gather-jhwkd" Jan 22 12:20:25 crc kubenswrapper[4752]: I0122 12:20:25.115283 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2" path="/var/lib/kubelet/pods/8d26fbb6-bb98-47e7-bc2e-55d6f51bbee2/volumes" Jan 22 12:20:29 crc kubenswrapper[4752]: I0122 12:20:29.099564 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:20:29 crc kubenswrapper[4752]: E0122 12:20:29.100327 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:20:44 crc kubenswrapper[4752]: I0122 12:20:44.097955 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:20:44 crc kubenswrapper[4752]: E0122 12:20:44.098784 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:20:57 crc kubenswrapper[4752]: I0122 12:20:57.098455 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:20:57 crc kubenswrapper[4752]: E0122 12:20:57.099260 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:21:12 crc kubenswrapper[4752]: I0122 12:21:12.098516 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:21:12 crc kubenswrapper[4752]: E0122 12:21:12.100573 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:21:27 crc kubenswrapper[4752]: I0122 12:21:27.098363 4752 scope.go:117] 
"RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:21:27 crc kubenswrapper[4752]: E0122 12:21:27.099368 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:21:42 crc kubenswrapper[4752]: I0122 12:21:42.098198 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:21:42 crc kubenswrapper[4752]: E0122 12:21:42.099035 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.629235 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hmcg7"] Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.634219 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.639993 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hmcg7"] Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.780438 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-utilities\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.780662 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-catalog-content\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.780779 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5fc\" (UniqueName: \"kubernetes.io/projected/549b9519-2fb8-4974-bdac-9d86be54c0ad-kube-api-access-pg5fc\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.882584 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-utilities\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.882665 4752 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-catalog-content\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.882700 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5fc\" (UniqueName: \"kubernetes.io/projected/549b9519-2fb8-4974-bdac-9d86be54c0ad-kube-api-access-pg5fc\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.883230 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-utilities\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.883269 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-catalog-content\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:50 crc kubenswrapper[4752]: I0122 12:21:50.908166 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5fc\" (UniqueName: \"kubernetes.io/projected/549b9519-2fb8-4974-bdac-9d86be54c0ad-kube-api-access-pg5fc\") pod \"certified-operators-hmcg7\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:51 crc kubenswrapper[4752]: I0122 12:21:51.002348 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:21:51 crc kubenswrapper[4752]: I0122 12:21:51.638959 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hmcg7"] Jan 22 12:21:52 crc kubenswrapper[4752]: I0122 12:21:52.107374 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerStarted","Data":"dafb751c93b2a8664870f9038f2864922a0c639a9757eed16edf1ecdf885d05c"} Jan 22 12:21:53 crc kubenswrapper[4752]: I0122 12:21:53.129985 4752 generic.go:334] "Generic (PLEG): container finished" podID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerID="6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1" exitCode=0 Jan 22 12:21:53 crc kubenswrapper[4752]: I0122 12:21:53.130061 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerDied","Data":"6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1"} Jan 22 12:21:53 crc kubenswrapper[4752]: I0122 12:21:53.133634 4752 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:21:54 crc kubenswrapper[4752]: I0122 12:21:54.144768 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerStarted","Data":"56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf"} Jan 22 12:21:55 crc kubenswrapper[4752]: I0122 12:21:55.098723 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:21:55 crc kubenswrapper[4752]: E0122 12:21:55.099751 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:21:56 crc kubenswrapper[4752]: I0122 12:21:56.172178 4752 generic.go:334] "Generic (PLEG): container finished" podID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerID="56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf" exitCode=0 Jan 22 12:21:56 crc kubenswrapper[4752]: I0122 12:21:56.172234 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerDied","Data":"56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf"} Jan 22 12:21:57 crc kubenswrapper[4752]: I0122 12:21:57.191576 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerStarted","Data":"5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b"} Jan 22 12:21:57 crc kubenswrapper[4752]: I0122 12:21:57.224640 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hmcg7" podStartSLOduration=3.747655955 podStartE2EDuration="7.224603061s" podCreationTimestamp="2026-01-22 12:21:50 +0000 UTC" 
firstStartedPulling="2026-01-22 12:21:53.133119715 +0000 UTC m=+6992.363062663" lastFinishedPulling="2026-01-22 12:21:56.610066861 +0000 UTC m=+6995.840009769" observedRunningTime="2026-01-22 12:21:57.209941127 +0000 UTC m=+6996.439884045" watchObservedRunningTime="2026-01-22 12:21:57.224603061 +0000 UTC m=+6996.454545969" Jan 22 12:22:01 crc kubenswrapper[4752]: I0122 12:22:01.002761 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:22:01 crc kubenswrapper[4752]: I0122 12:22:01.004176 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:22:01 crc kubenswrapper[4752]: I0122 12:22:01.049511 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:22:01 crc kubenswrapper[4752]: I0122 12:22:01.305411 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:22:01 crc kubenswrapper[4752]: I0122 12:22:01.405482 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hmcg7"] Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.250704 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hmcg7" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="registry-server" containerID="cri-o://5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b" gracePeriod=2 Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.782270 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.882176 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-catalog-content\") pod \"549b9519-2fb8-4974-bdac-9d86be54c0ad\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.883371 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg5fc\" (UniqueName: \"kubernetes.io/projected/549b9519-2fb8-4974-bdac-9d86be54c0ad-kube-api-access-pg5fc\") pod \"549b9519-2fb8-4974-bdac-9d86be54c0ad\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.883515 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-utilities\") pod \"549b9519-2fb8-4974-bdac-9d86be54c0ad\" (UID: \"549b9519-2fb8-4974-bdac-9d86be54c0ad\") " Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.884777 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-utilities" (OuterVolumeSpecName: "utilities") pod "549b9519-2fb8-4974-bdac-9d86be54c0ad" (UID: "549b9519-2fb8-4974-bdac-9d86be54c0ad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.888231 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549b9519-2fb8-4974-bdac-9d86be54c0ad-kube-api-access-pg5fc" (OuterVolumeSpecName: "kube-api-access-pg5fc") pod "549b9519-2fb8-4974-bdac-9d86be54c0ad" (UID: "549b9519-2fb8-4974-bdac-9d86be54c0ad"). InnerVolumeSpecName "kube-api-access-pg5fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.932633 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "549b9519-2fb8-4974-bdac-9d86be54c0ad" (UID: "549b9519-2fb8-4974-bdac-9d86be54c0ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.986309 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg5fc\" (UniqueName: \"kubernetes.io/projected/549b9519-2fb8-4974-bdac-9d86be54c0ad-kube-api-access-pg5fc\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.986343 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:03 crc kubenswrapper[4752]: I0122 12:22:03.986352 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549b9519-2fb8-4974-bdac-9d86be54c0ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.260502 4752 generic.go:334] "Generic (PLEG): container finished" podID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerID="5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b" exitCode=0 Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.260564 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hmcg7" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.260562 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerDied","Data":"5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b"} Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.261033 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmcg7" event={"ID":"549b9519-2fb8-4974-bdac-9d86be54c0ad","Type":"ContainerDied","Data":"dafb751c93b2a8664870f9038f2864922a0c639a9757eed16edf1ecdf885d05c"} Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.261060 4752 scope.go:117] "RemoveContainer" containerID="5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.293702 4752 scope.go:117] "RemoveContainer" containerID="56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.299050 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hmcg7"] Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.308297 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hmcg7"] Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.343071 4752 scope.go:117] "RemoveContainer" containerID="6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.378345 4752 scope.go:117] "RemoveContainer" containerID="5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b" Jan 22 12:22:04 crc kubenswrapper[4752]: E0122 12:22:04.379257 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b\": container with ID starting with 5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b not found: ID does not exist" containerID="5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.379301 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b"} err="failed to get container status \"5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b\": rpc error: code = NotFound desc = could not find container \"5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b\": container with ID starting with 5b3d59c3573f8510cbae89b19c3cd039ccd9a70e98e18b75b2473ce9b35da57b not found: ID does not exist" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.379328 4752 scope.go:117] "RemoveContainer" containerID="56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf" Jan 22 12:22:04 crc kubenswrapper[4752]: E0122 12:22:04.379667 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf\": container with ID starting with 56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf not found: ID does not exist" containerID="56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.379689 4752 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf"} err="failed to get container status \"56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf\": rpc error: code = NotFound desc = could not find container \"56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf\": container with ID starting with 56ba42a27e2a628f3eee6d41ba4c96b99a9d053e38ed1612f530ee602aaa6daf not found: ID does not exist" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.379702 4752 scope.go:117] "RemoveContainer" containerID="6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1" Jan 22 12:22:04 crc kubenswrapper[4752]: E0122 12:22:04.379958 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1\": container with ID starting with 6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1 not found: ID does not exist" containerID="6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1" Jan 22 12:22:04 crc kubenswrapper[4752]: I0122 12:22:04.379977 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1"} err="failed to get container status \"6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1\": rpc error: code = NotFound desc = could not find container \"6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1\": container with ID starting with 6d37d41aa37f916a21072b0de7f01c468dbf7afca0795b693f214914a0ce3fa1 not found: ID does not exist" Jan 22 12:22:05 crc kubenswrapper[4752]: I0122 12:22:05.117591 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" path="/var/lib/kubelet/pods/549b9519-2fb8-4974-bdac-9d86be54c0ad/volumes" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.899827 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sv5z"] Jan 22 12:22:06 crc kubenswrapper[4752]: E0122 12:22:06.902910 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="extract-utilities" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.903038 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="extract-utilities" Jan 22 12:22:06 crc kubenswrapper[4752]: E0122 12:22:06.903139 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="extract-content" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.903219 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="extract-content" Jan 22 12:22:06 crc kubenswrapper[4752]: E0122 12:22:06.903306 4752 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="registry-server" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.903389 4752 state_mem.go:107] "Deleted CPUSet assignment" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" containerName="registry-server" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.903714 4752 memory_manager.go:354] "RemoveStaleState removing state" podUID="549b9519-2fb8-4974-bdac-9d86be54c0ad" 
containerName="registry-server" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.905757 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.916120 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sv5z"] Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.954467 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-utilities\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.954512 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-catalog-content\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:06 crc kubenswrapper[4752]: I0122 12:22:06.954587 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29x2\" (UniqueName: \"kubernetes.io/projected/e079238e-454f-435a-86ba-d06c27c70d0d-kube-api-access-p29x2\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.056464 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-utilities\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.056514 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-catalog-content\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.056591 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p29x2\" (UniqueName: \"kubernetes.io/projected/e079238e-454f-435a-86ba-d06c27c70d0d-kube-api-access-p29x2\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.057376 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-utilities\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.057579 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-catalog-content\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " 
pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.075201 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29x2\" (UniqueName: \"kubernetes.io/projected/e079238e-454f-435a-86ba-d06c27c70d0d-kube-api-access-p29x2\") pod \"redhat-marketplace-4sv5z\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.263102 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:07 crc kubenswrapper[4752]: I0122 12:22:07.788996 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sv5z"] Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.306944 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerDied","Data":"630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234"} Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.308053 4752 generic.go:334] "Generic (PLEG): container finished" podID="e079238e-454f-435a-86ba-d06c27c70d0d" containerID="630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234" exitCode=0 Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.309532 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerStarted","Data":"f062afce83ffaa58214629bba6a8de38c22a307d493eb460abd18a1b9ff12c42"} Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.700574 4752 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2nj8c"] Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.703407 4752 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.709244 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nj8c"] Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.797270 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-catalog-content\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.797450 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjn9b\" (UniqueName: \"kubernetes.io/projected/9d960903-8470-4efb-84be-5fc5ada5fba2-kube-api-access-sjn9b\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.797523 4752 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-utilities\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.900051 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjn9b\" (UniqueName: \"kubernetes.io/projected/9d960903-8470-4efb-84be-5fc5ada5fba2-kube-api-access-sjn9b\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.900125 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-utilities\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.900285 4752 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-catalog-content\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.900828 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-catalog-content\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.901399 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-utilities\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:08 crc kubenswrapper[4752]: I0122 12:22:08.921081 4752 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sjn9b\" (UniqueName: \"kubernetes.io/projected/9d960903-8470-4efb-84be-5fc5ada5fba2-kube-api-access-sjn9b\") pod \"redhat-operators-2nj8c\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:09 crc kubenswrapper[4752]: I0122 12:22:09.026796 4752 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:09 crc kubenswrapper[4752]: I0122 12:22:09.342215 4752 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nj8c"] Jan 22 12:22:09 crc kubenswrapper[4752]: I0122 12:22:09.349345 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerStarted","Data":"14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5"} Jan 22 12:22:10 crc kubenswrapper[4752]: I0122 12:22:10.098965 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:22:10 crc kubenswrapper[4752]: E0122 12:22:10.099804 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:22:10 crc kubenswrapper[4752]: I0122 12:22:10.362961 4752 generic.go:334] "Generic (PLEG): container finished" podID="e079238e-454f-435a-86ba-d06c27c70d0d" containerID="14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5" exitCode=0 Jan 22 12:22:10 crc kubenswrapper[4752]: I0122 12:22:10.363072 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerDied","Data":"14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5"} Jan 22 12:22:10 crc kubenswrapper[4752]: I0122 12:22:10.365708 4752 generic.go:334] "Generic (PLEG): container finished" podID="9d960903-8470-4efb-84be-5fc5ada5fba2" containerID="45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685" exitCode=0 Jan 22 12:22:10 crc kubenswrapper[4752]: I0122 12:22:10.365748 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerDied","Data":"45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685"} Jan 22 12:22:10 crc kubenswrapper[4752]: I0122 12:22:10.365770 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerStarted","Data":"845a47bdaf0906f533aaf958e0e1d079385fa3e679f64ad7100fcdae5d8ca54a"} Jan 22 12:22:11 crc kubenswrapper[4752]: I0122 12:22:11.378360 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerStarted","Data":"18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e"} Jan 22 12:22:11 crc kubenswrapper[4752]: I0122 12:22:11.386596 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerStarted","Data":"f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e"} Jan 22 12:22:11 crc kubenswrapper[4752]: I0122 12:22:11.407035 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sv5z" podStartSLOduration=2.9547273179999998 podStartE2EDuration="5.407020051s" podCreationTimestamp="2026-01-22 12:22:06 +0000 UTC" firstStartedPulling="2026-01-22 12:22:08.308532539 +0000 UTC m=+7007.538475447" lastFinishedPulling="2026-01-22 12:22:10.760825252 +0000 UTC m=+7009.990768180" observedRunningTime="2026-01-22 12:22:11.397763178 +0000 UTC m=+7010.627706086" watchObservedRunningTime="2026-01-22 12:22:11.407020051 +0000 UTC m=+7010.636962959" Jan 22 12:22:15 crc kubenswrapper[4752]: I0122 12:22:15.433700 4752 generic.go:334] "Generic (PLEG): container finished" podID="9d960903-8470-4efb-84be-5fc5ada5fba2" containerID="f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e" exitCode=0 Jan 22 12:22:15 crc kubenswrapper[4752]: I0122 12:22:15.433914 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerDied","Data":"f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e"} Jan 22 12:22:17 crc kubenswrapper[4752]: I0122 12:22:17.263975 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:17 crc kubenswrapper[4752]: I0122 12:22:17.264422 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:17 crc kubenswrapper[4752]: I0122 12:22:17.315030 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:17 crc kubenswrapper[4752]: I0122 12:22:17.457161 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerStarted","Data":"2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68"} Jan 22 12:22:17 crc kubenswrapper[4752]: I0122 12:22:17.482316 4752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2nj8c" podStartSLOduration=3.58409408 podStartE2EDuration="9.482298626s" podCreationTimestamp="2026-01-22 12:22:08 +0000 UTC" firstStartedPulling="2026-01-22 12:22:10.368704209 +0000 UTC m=+7009.598647117" lastFinishedPulling="2026-01-22 12:22:16.266908745 +0000 UTC m=+7015.496851663" observedRunningTime="2026-01-22 12:22:17.480495108 +0000 UTC m=+7016.710438016" watchObservedRunningTime="2026-01-22 12:22:17.482298626 +0000 UTC m=+7016.712241534" Jan 22 12:22:17 crc kubenswrapper[4752]: I0122 12:22:17.513676 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:18 crc kubenswrapper[4752]: I0122 12:22:18.581639 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sv5z"] Jan 22 12:22:19 crc kubenswrapper[4752]: I0122 12:22:19.028041 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:19 crc kubenswrapper[4752]: I0122 12:22:19.028410 4752 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:19 crc kubenswrapper[4752]: I0122 12:22:19.478470 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sv5z" podUID="e079238e-454f-435a-86ba-d06c27c70d0d" containerName="registry-server" containerID="cri-o://18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e" gracePeriod=2 Jan 22 12:22:19 crc kubenswrapper[4752]: I0122 12:22:19.939511 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.003993 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p29x2\" (UniqueName: \"kubernetes.io/projected/e079238e-454f-435a-86ba-d06c27c70d0d-kube-api-access-p29x2\") pod \"e079238e-454f-435a-86ba-d06c27c70d0d\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.004195 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-utilities\") pod \"e079238e-454f-435a-86ba-d06c27c70d0d\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.004231 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-catalog-content\") pod \"e079238e-454f-435a-86ba-d06c27c70d0d\" (UID: \"e079238e-454f-435a-86ba-d06c27c70d0d\") " Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.008681 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-utilities" (OuterVolumeSpecName: "utilities") pod "e079238e-454f-435a-86ba-d06c27c70d0d" (UID: "e079238e-454f-435a-86ba-d06c27c70d0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.014969 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e079238e-454f-435a-86ba-d06c27c70d0d-kube-api-access-p29x2" (OuterVolumeSpecName: "kube-api-access-p29x2") pod "e079238e-454f-435a-86ba-d06c27c70d0d" (UID: "e079238e-454f-435a-86ba-d06c27c70d0d"). InnerVolumeSpecName "kube-api-access-p29x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.025087 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e079238e-454f-435a-86ba-d06c27c70d0d" (UID: "e079238e-454f-435a-86ba-d06c27c70d0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.087614 4752 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2nj8c" podUID="9d960903-8470-4efb-84be-5fc5ada5fba2" containerName="registry-server" probeResult="failure" output=< Jan 22 12:22:20 crc kubenswrapper[4752]: timeout: failed to connect service ":50051" within 1s Jan 22 12:22:20 crc kubenswrapper[4752]: > Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.106727 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.106778 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e079238e-454f-435a-86ba-d06c27c70d0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.106799 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p29x2\" (UniqueName: \"kubernetes.io/projected/e079238e-454f-435a-86ba-d06c27c70d0d-kube-api-access-p29x2\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.492299 4752 generic.go:334] "Generic (PLEG): container finished" podID="e079238e-454f-435a-86ba-d06c27c70d0d" containerID="18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e" exitCode=0 Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.492347 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerDied","Data":"18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e"} Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.492385 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sv5z" event={"ID":"e079238e-454f-435a-86ba-d06c27c70d0d","Type":"ContainerDied","Data":"f062afce83ffaa58214629bba6a8de38c22a307d493eb460abd18a1b9ff12c42"} Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.492410 4752 scope.go:117] "RemoveContainer" containerID="18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.492442 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sv5z" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.533244 4752 scope.go:117] "RemoveContainer" containerID="14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.538805 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sv5z"] Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.558362 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sv5z"] Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.563298 4752 scope.go:117] "RemoveContainer" containerID="630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.624474 4752 scope.go:117] "RemoveContainer" containerID="18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e" Jan 22 12:22:20 crc kubenswrapper[4752]: E0122 12:22:20.625004 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e\": container with ID starting with 18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e not found: ID does not exist" containerID="18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.625043 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e"} err="failed to get container status \"18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e\": rpc error: code = NotFound desc = could not find container \"18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e\": container with ID starting with 18bf45c84ca6ea26d2d3861ad701329e998b7a1e1df67f74fd6a9a068b59636e not found: ID does not exist" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.625070 4752 scope.go:117] "RemoveContainer" containerID="14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5" Jan 22 12:22:20 crc kubenswrapper[4752]: E0122 12:22:20.626649 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5\": container with ID starting with 14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5 not found: ID does not exist" containerID="14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.626698 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5"} err="failed to get container status \"14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5\": rpc error: code = NotFound desc = could not find container \"14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5\": container with ID starting with 14a0eb760026777f4d4ac830071e7e1afc1ad3a9c4a7f2d2e4cff2a09d7572f5 not found: ID does not exist" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.626718 4752 scope.go:117] "RemoveContainer" containerID="630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234" Jan 22 12:22:20 crc kubenswrapper[4752]: E0122 12:22:20.629146 4752 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234\": container with ID starting with 630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234 not found: ID does not exist" containerID="630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234" Jan 22 12:22:20 crc kubenswrapper[4752]: I0122 12:22:20.629173 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234"} err="failed to get container status \"630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234\": rpc error: code = NotFound desc = could not find container \"630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234\": container with ID starting with 630b66a0ca8abb16a92580e279f77a939ca36a5cb36d54396ce7a5b25f0f5234 not found: ID does not exist" Jan 22 12:22:21 crc kubenswrapper[4752]: I0122 12:22:21.109504 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e079238e-454f-435a-86ba-d06c27c70d0d" path="/var/lib/kubelet/pods/e079238e-454f-435a-86ba-d06c27c70d0d/volumes" Jan 22 12:22:24 crc kubenswrapper[4752]: I0122 12:22:24.099842 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:22:24 crc kubenswrapper[4752]: E0122 12:22:24.100926 4752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6hm8_openshift-machine-config-operator(eb8df70c-9474-4827-8831-f39fc6883d79)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" podUID="eb8df70c-9474-4827-8831-f39fc6883d79" Jan 22 12:22:29 crc kubenswrapper[4752]: I0122 12:22:29.082784 4752 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:29 crc kubenswrapper[4752]: I0122 12:22:29.143045 4752 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:29 crc kubenswrapper[4752]: I0122 12:22:29.326837 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nj8c"] Jan 22 12:22:30 crc kubenswrapper[4752]: I0122 12:22:30.595514 4752 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2nj8c" podUID="9d960903-8470-4efb-84be-5fc5ada5fba2" containerName="registry-server" containerID="cri-o://2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68" gracePeriod=2 Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.114438 4752 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.304799 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-utilities\") pod \"9d960903-8470-4efb-84be-5fc5ada5fba2\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.304977 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-catalog-content\") pod \"9d960903-8470-4efb-84be-5fc5ada5fba2\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.305064 4752 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjn9b\" (UniqueName: \"kubernetes.io/projected/9d960903-8470-4efb-84be-5fc5ada5fba2-kube-api-access-sjn9b\") pod \"9d960903-8470-4efb-84be-5fc5ada5fba2\" (UID: \"9d960903-8470-4efb-84be-5fc5ada5fba2\") " Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.305914 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-utilities" (OuterVolumeSpecName: "utilities") pod "9d960903-8470-4efb-84be-5fc5ada5fba2" (UID: "9d960903-8470-4efb-84be-5fc5ada5fba2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.313015 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d960903-8470-4efb-84be-5fc5ada5fba2-kube-api-access-sjn9b" (OuterVolumeSpecName: "kube-api-access-sjn9b") pod "9d960903-8470-4efb-84be-5fc5ada5fba2" (UID: "9d960903-8470-4efb-84be-5fc5ada5fba2"). InnerVolumeSpecName "kube-api-access-sjn9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.408120 4752 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjn9b\" (UniqueName: \"kubernetes.io/projected/9d960903-8470-4efb-84be-5fc5ada5fba2-kube-api-access-sjn9b\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.408154 4752 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.441164 4752 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d960903-8470-4efb-84be-5fc5ada5fba2" (UID: "9d960903-8470-4efb-84be-5fc5ada5fba2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.512077 4752 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d960903-8470-4efb-84be-5fc5ada5fba2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.607682 4752 generic.go:334] "Generic (PLEG): container finished" podID="9d960903-8470-4efb-84be-5fc5ada5fba2" containerID="2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68" exitCode=0 Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.607727 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerDied","Data":"2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68"} Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.607757 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nj8c" event={"ID":"9d960903-8470-4efb-84be-5fc5ada5fba2","Type":"ContainerDied","Data":"845a47bdaf0906f533aaf958e0e1d079385fa3e679f64ad7100fcdae5d8ca54a"} Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.607778 4752 scope.go:117] "RemoveContainer" containerID="2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.607950 4752 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2nj8c" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.643445 4752 scope.go:117] "RemoveContainer" containerID="f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.676492 4752 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nj8c"] Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.686159 4752 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2nj8c"] Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.695650 4752 scope.go:117] "RemoveContainer" containerID="45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.724439 4752 scope.go:117] "RemoveContainer" containerID="2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68" Jan 22 12:22:31 crc kubenswrapper[4752]: E0122 12:22:31.725307 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68\": container with ID starting with 2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68 not found: ID does not exist" containerID="2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.725386 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68"} err="failed to get container status \"2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68\": rpc error: code = NotFound desc = could not find container \"2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68\": container with ID starting with 2d178bf97b6e3d3d0b2029c499146b2ade2ebbcd42d9fb839f53c6272e4e9f68 not found: ID does not exist" Jan 22 12:22:31 crc 
kubenswrapper[4752]: I0122 12:22:31.725423 4752 scope.go:117] "RemoveContainer" containerID="f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e" Jan 22 12:22:31 crc kubenswrapper[4752]: E0122 12:22:31.725829 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e\": container with ID starting with f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e not found: ID does not exist" containerID="f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.725880 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e"} err="failed to get container status \"f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e\": rpc error: code = NotFound desc = could not find container \"f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e\": container with ID starting with f01238d7971422393feb6adb2b9100ff0c9f727bf914361918d59d335259d56e not found: ID does not exist" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.725909 4752 scope.go:117] "RemoveContainer" containerID="45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685" Jan 22 12:22:31 crc kubenswrapper[4752]: E0122 12:22:31.726288 4752 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685\": container with ID starting with 45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685 not found: ID does not exist" containerID="45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685" Jan 22 12:22:31 crc kubenswrapper[4752]: I0122 12:22:31.726328 4752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685"} err="failed to get container status \"45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685\": rpc error: code = NotFound desc = could not find container \"45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685\": container with ID starting with 45d8aad582224676ff1b0c0722d41f7147f01f1afc8ad8c268e0c726e1f0e685 not found: ID does not exist" Jan 22 12:22:33 crc kubenswrapper[4752]: I0122 12:22:33.108524 4752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d960903-8470-4efb-84be-5fc5ada5fba2" path="/var/lib/kubelet/pods/9d960903-8470-4efb-84be-5fc5ada5fba2/volumes" Jan 22 12:22:36 crc kubenswrapper[4752]: I0122 12:22:36.098051 4752 scope.go:117] "RemoveContainer" containerID="722914d5b6484ced4833b41bf108d126b645f7169e2294629e0c22b5735c83fb" Jan 22 12:22:36 crc kubenswrapper[4752]: I0122 12:22:36.680468 4752 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6hm8" event={"ID":"eb8df70c-9474-4827-8831-f39fc6883d79","Type":"ContainerStarted","Data":"e7343b930f4f2ae455997c61d7418175b83a5d16de20e6be536777c4466058f2"}